[Yahoo-eng-team] [Bug 1385140] [NEW] Endpoint does not support RPC version 3.33

2014-10-24 Thread Shikanda
Public bug reported:

Hi, I am running OpenStack Juno on Ubuntu 14.04.1 LTS, and whenever I
launch an instance using nova it gets stuck in the build state.

The logs for nova-compute.log:

  2014-10-24 12:11:02.611 1523 ERROR oslo.messaging.rpc.dispatcher [-] 
Exception during message handling: Endpoint does not support RPC version 3.33
2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 133, 
in _dispatch_and_reply
2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 185, 
in _dispatch
2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher raise 
UnsupportedVersion(version)
2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher 
UnsupportedVersion: Endpoint does not support RPC version 3.33
2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher 
2014-10-24 12:11:02.612 1523 ERROR oslo.messaging._drivers.common [-] Returning 
exception Endpoint does not support RPC version 3.33 to caller
2014-10-24 12:11:02.612 1523 ERROR oslo.messaging._drivers.common [-] 
['Traceback (most recent call last):\n', '  File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 133, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 185, 
in _dispatch\nraise UnsupportedVersion(version)\n', 'UnsupportedVersion: 
Endpoint does not support RPC version 3.33\n']
2014-10-24 12:11:22.839 1523 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
2014-10-24 12:11:23.212 1523 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 15496
2014-10-24 12:11:23.213 1523 AUDIT nova.compute.resource_tracker [-] Free disk 
(GB): 534
2014-10-24 12:11:23.213 1523 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 4
2014-10-24 12:11:23.245 1523 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for compute1:compute1

Any ideas on how to get past this hurdle?
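
In case it helps: this error usually means a version skew between nova services,
e.g. an older (Icehouse-level) nova-compute receiving calls made at the Juno
compute RPC API level (3.33). As a rough, hedged sketch (illustrative names, not
the actual oslo.messaging internals), the dispatcher applies a compatibility rule
like this before handing the message to the endpoint:

    def version_is_compatible(endpoint_version, requested_version):
        # Same major version required; the requested minor version must not
        # exceed what the endpoint implements.
        endpoint_major, endpoint_minor = map(int, endpoint_version.split('.'))
        requested_major, requested_minor = map(int, requested_version.split('.'))
        return (endpoint_major == requested_major and
                requested_minor <= endpoint_minor)

    print(version_is_compatible('3.23', '3.33'))  # False -> UnsupportedVersion
    print(version_is_compatible('3.35', '3.33'))  # True

So the first things to check are that the compute node runs the same Juno nova
packages as the controller, and that any [upgrade_levels] pin in nova.conf is
consistent across nodes.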

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1385140

Title:
   Endpoint does not support RPC version 3.33

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hi, I am running OpenStack Juno on Ubuntu 14.04.1 LTS, and whenever I
  launch an instance using nova it gets stuck in the build state.

  The logs for nova-compute.log:

2014-10-24 12:11:02.611 1523 ERROR oslo.messaging.rpc.dispatcher [-] 
Exception during message handling: Endpoint does not support RPC version 3.33
  2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 133, 
in _dispatch_and_reply
  2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 185, 
in _dispatch
  2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher raise 
UnsupportedVersion(version)
  2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher 
UnsupportedVersion: Endpoint does not support RPC version 3.33
  2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher 
  2014-10-24 12:11:02.612 1523 ERROR oslo.messaging._drivers.common [-] 
Returning exception Endpoint does not support RPC version 3.33 to caller
  2014-10-24 12:11:02.612 1523 ERROR oslo.messaging._drivers.common [-] 
['Traceback (most recent call last):\n', '  File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 133, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 185, 
in _dispatch\nraise UnsupportedVersion(version)\n', 'UnsupportedVersion: 
Endpoint does not support RPC version 3.33\n']
  2014-10-24 12:11:22.839 1523 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
  2014-10-24 12:11:23.212 1523 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 15496
  2014-10-24 12:11:23.213 1523 AUDIT nova.compute.resource_tracker [-] Free 
disk (GB): 534
  2014-10-24 12:11:23.213 1523 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 4
  2014-10-24 12:11:23.245 1523 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for compute1:compute1

  Any ideas on how to get past this hurdle?

[Yahoo-eng-team] [Bug 1385140] Re: Endpoint does not support RPC version 3.33

2014-10-24 Thread Oleg Bondarev
** Project changed: neutron => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385140

Title:
   Endpoint does not support RPC version 3.33

Status in OpenStack Compute (Nova):
  New

Bug description:
  Hi, I am running OpenStack Juno on Ubuntu 14.04.1 LTS, and whenever I
  launch an instance using nova it gets stuck in the build state.

  The logs for nova-compute.log:

2014-10-24 12:11:02.611 1523 ERROR oslo.messaging.rpc.dispatcher [-] 
Exception during message handling: Endpoint does not support RPC version 3.33
  2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 133, 
in _dispatch_and_reply
  2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 185, 
in _dispatch
  2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher raise 
UnsupportedVersion(version)
  2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher 
UnsupportedVersion: Endpoint does not support RPC version 3.33
  2014-10-24 12:11:02.611 1523 TRACE oslo.messaging.rpc.dispatcher 
  2014-10-24 12:11:02.612 1523 ERROR oslo.messaging._drivers.common [-] 
Returning exception Endpoint does not support RPC version 3.33 to caller
  2014-10-24 12:11:02.612 1523 ERROR oslo.messaging._drivers.common [-] 
['Traceback (most recent call last):\n', '  File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 133, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 185, 
in _dispatch\nraise UnsupportedVersion(version)\n', 'UnsupportedVersion: 
Endpoint does not support RPC version 3.33\n']
  2014-10-24 12:11:22.839 1523 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
  2014-10-24 12:11:23.212 1523 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 15496
  2014-10-24 12:11:23.213 1523 AUDIT nova.compute.resource_tracker [-] Free 
disk (GB): 534
  2014-10-24 12:11:23.213 1523 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 4
  2014-10-24 12:11:23.245 1523 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for compute1:compute1

  Any ideas on how to get past this hurdle?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1385140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385213] [NEW] Swift multi-tenant store: upload broken

2014-10-24 Thread Stuart McLaren
Public bug reported:

glance image-upload xxx

returns an HTTP 500 error when using the Swift multi-tenant backend

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1385213

Title:
  Swift multi-tenant store: upload broken

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  glance image-upload xxx

  returns an HTTP 500 error when using the Swift multi-tenant backend

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1385213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385225] [NEW] typo in the policy.json rule_admin_api

2014-10-24 Thread Attila Fazekas
Public bug reported:

http://logstash.openstack.org/#eyJzZWFyY2giOiInRmFpbGVkIHRvIHVuZGVyc3RhbmQgcnVsZSBydWxlX2FkbWluX2FwaScgQU5EIHRhZ3M6XCJzY3JlZW4tbi1jcHUudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImFsbCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTQxNDg2Nzk5MjMsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

[-] Failed to understand rule rule_admin_api
2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy Traceback 
(most recent call last):
2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy   File 
/opt/stack/new/nova/nova/openstack/common/policy.py, line 533, in _parse_check
2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy kind, 
match = rule.split(':', 1)
2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy ValueError: 
need more than 1 value to unpack
2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy 

https://github.com/openstack/nova/blob/e53cb39c298d84a6a8c505c70bf7ceff43173947/etc/nova/policy.json#L165
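
For reference, the parser at policy.py line 533 (see the traceback above) splits
each rule into a kind:match pair, so a value written with an underscore instead
of a colon cannot be unpacked. A minimal illustration, not the nova code itself:

    kind, match = 'rule:admin_api'.split(':', 1)   # OK -> ('rule', 'admin_api')
    kind, match = 'rule_admin_api'.split(':', 1)   # ValueError: need more than
                                                   # 1 value to unpack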

** Affects: nova
 Importance: Undecided
 Assignee: Attila Fazekas (afazekas)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385225

Title:
  typo in the policy.json  rule_admin_api

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiInRmFpbGVkIHRvIHVuZGVyc3RhbmQgcnVsZSBydWxlX2FkbWluX2FwaScgQU5EIHRhZ3M6XCJzY3JlZW4tbi1jcHUudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImFsbCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTQxNDg2Nzk5MjMsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

[-] Failed to understand rule rule_admin_api
  2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy Traceback 
(most recent call last):
  2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy   File 
/opt/stack/new/nova/nova/openstack/common/policy.py, line 533, in _parse_check
  2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy kind, 
match = rule.split(':', 1)
  2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy ValueError: 
need more than 1 value to unpack
  2014-10-24 10:59:53.921 23349 TRACE nova.openstack.common.policy 

  
https://github.com/openstack/nova/blob/e53cb39c298d84a6a8c505c70bf7ceff43173947/etc/nova/policy.json#L165

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1385225/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385244] [NEW] [Heat] Inappropriate color for stack resources rows

2014-10-24 Thread Tatiana Ovchinnikova
Public bug reported:

Although the status of the stack resource is Create Complete, the row color
remains yellow instead of grey.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1385244

Title:
  [Heat] Inappropriate color for stack resources rows

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Although the status of the stack resource is Create Complete, the row
  color remains yellow instead of grey.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1385244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385246] [NEW] Catch specific exceptions for _get_instance_nw_info

2014-10-24 Thread Gary Kotton
Public bug reported:

Occasionally see the following logs:

2014-10-19 17:29:54.170 27466 ERROR nova.compute.manager [-] [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] An error occurred while refreshing the 
network cache.
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] Traceback (most recent call last):
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
/opt/stack/nova/nova/compute/manager.py, line 5327, in 
_heal_instance_info_cache
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] self._get_instance_nw_info(context, 
instance, use_slave=True)
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
/opt/stack/nova/nova/compute/manager.py, line 1233, in _get_instance_nw_info
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] instance)
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
/opt/stack/nova/nova/network/api.py, line 48, in wrapped
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] return func(self, context, *args, 
**kwargs)
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
/opt/stack/nova/nova/network/api.py, line 374, in get_instance_nw_info
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] result = 
self._get_instance_nw_info(context, instance)
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
/opt/stack/nova/nova/network/api.py, line 390, in _get_instance_nw_info
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] nw_info = 
self.network_rpcapi.get_instance_nw_info(context, **args)
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
/opt/stack/nova/nova/network/rpcapi.py, line 242, in get_instance_nw_info
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] host=host, project_id=project_id)
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py, line 
152, in call
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] retry=self.retry)
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/transport.py, line 90, 
in _send
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] timeout=timeout, retry=retry)
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 408, in send
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] retry=retry)
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py, 
line 399, in _send
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] raise result
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] InstanceNotFound_Remote: Instance 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1 could not be found.
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] Traceback (most recent call last):
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] 
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
/opt/stack/nova/nova/conductor/manager.py, line 400, in _object_dispatch
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] return getattr(target, 
method)(context, *args, **kwargs)
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] 
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1]   File 
/opt/stack/nova/nova/objects/base.py, line 155, in wrapper
2014-10-19 17:29:54.170 27466 TRACE nova.compute.manager [instance: 
dd50ac93-adf2-4e9d-915f-d8d1527d82e1] result = fn(cls, 
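
The fix suggested by the title would be to narrow the exception handling around
this call so that an instance deleted while the periodic task runs is logged
quietly instead of producing a full ERROR traceback. A rough sketch under that
assumption (not Nova's actual code; nova.exception.InstanceNotFound is the
exception seen remotely above):

    import logging

    from nova import exception

    LOG = logging.getLogger(__name__)

    def refresh_nw_info(manager, context, instance):
        # Tolerate the instance disappearing between the periodic task listing
        # it and the network info lookup resolving it.
        try:
            manager._get_instance_nw_info(context, instance, use_slave=True)
        except exception.InstanceNotFound:
            LOG.debug('Instance %s was deleted while refreshing its network '
                      'cache; skipping.', instance.uuid)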

[Yahoo-eng-team] [Bug 1384555] Re: Openstack Neutron Database error on filling database

2014-10-24 Thread Ann Kamyshnikova
** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: neutron
   Status: Invalid => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384555

Title:
  Openstack Neutron Database error on filling database

Status in OpenStack Neutron (virtual network service):
  Opinion
Status in “neutron” package in Ubuntu:
  New

Bug description:
  On a fresh installation of Juno, it seems that the database is not being
  populated correctly. This is the output of the log (I also demonstrated
  that the DB had no tables to begin with):

  MariaDB [(none)]> use neutron
  Database changed
  MariaDB [neutron]> show tables;
  Empty set (0.00 sec)

  MariaDB [neutron]> quit
  Bye
  root@vm-1:~# neutron-db-manage --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/plugin.ini current
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  Current revision for mysql://neutron:X@10.10.10.1/neutron: None
  root@vm-1:~# neutron-db-manage --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/plugin.ini upgrade head
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  INFO  [alembic.migration] Running upgrade None - havana, havana_initial
  INFO  [alembic.migration] Running upgrade havana - e197124d4b9, add unique 
constraint to members
  INFO  [alembic.migration] Running upgrade e197124d4b9 - 1fcfc149aca4, Add a 
unique constraint on (agent_type, host) columns to prevent a race condition 
when an agent entry is 'upserted'.
  INFO  [alembic.migration] Running upgrade 1fcfc149aca4 - 50e86cb2637a, 
nsx_mappings
  INFO  [alembic.migration] Running upgrade 50e86cb2637a - 1421183d533f, NSX 
DHCP/metadata support
  INFO  [alembic.migration] Running upgrade 1421183d533f - 3d3cb89d84ee, 
nsx_switch_mappings
  INFO  [alembic.migration] Running upgrade 3d3cb89d84ee - 4ca36cfc898c, 
nsx_router_mappings
  INFO  [alembic.migration] Running upgrade 4ca36cfc898c - 27cc183af192, 
ml2_vnic_type
  INFO  [alembic.migration] Running upgrade 27cc183af192 - 50d5ba354c23, ml2 
binding:vif_details
  INFO  [alembic.migration] Running upgrade 50d5ba354c23 - 157a5d299379, ml2 
binding:profile
  INFO  [alembic.migration] Running upgrade 157a5d299379 - 3d2585038b95, 
VMware NSX rebranding
  INFO  [alembic.migration] Running upgrade 3d2585038b95 - abc88c33f74f, lb 
stats
  INFO  [alembic.migration] Running upgrade abc88c33f74f - 1b2580001654, 
nsx_sec_group_mapping
  INFO  [alembic.migration] Running upgrade 1b2580001654 - e766b19a3bb, 
nuage_initial
  INFO  [alembic.migration] Running upgrade e766b19a3bb - 2eeaf963a447, 
floatingip_status
  INFO  [alembic.migration] Running upgrade 2eeaf963a447 - 492a106273f8, 
Brocade ML2 Mech. Driver
  INFO  [alembic.migration] Running upgrade 492a106273f8 - 24c7ea5160d7, Cisco 
CSR VPNaaS
  INFO  [alembic.migration] Running upgrade 24c7ea5160d7 - 81c553f3776c, 
bsn_consistencyhashes
  INFO  [alembic.migration] Running upgrade 81c553f3776c - 117643811bca, nec: 
delete old ofc mapping tables
  INFO  [alembic.migration] Running upgrade 117643811bca - 19180cf98af6, 
nsx_gw_devices
  INFO  [alembic.migration] Running upgrade 19180cf98af6 - 33dd0a9fa487, 
embrane_lbaas_driver
  INFO  [alembic.migration] Running upgrade 33dd0a9fa487 - 2447ad0e9585, Add 
IPv6 Subnet properties
  INFO  [alembic.migration] Running upgrade 2447ad0e9585 - 538732fa21e1, NEC 
Rename quantum_id to neutron_id
  INFO  [alembic.migration] Running upgrade 538732fa21e1 - 5ac1c354a051, n1kv 
segment allocs for cisco n1kv plugin
  INFO  [alembic.migration] Running upgrade 5ac1c354a051 - icehouse, icehouse
  INFO  [alembic.migration] Running upgrade icehouse - 54f7549a0e5f, 
set_not_null_peer_address
  INFO  [alembic.migration] Running upgrade 54f7549a0e5f - 1e5dd1d09b22, 
set_not_null_fields_lb_stats
  INFO  [alembic.migration] Running upgrade 1e5dd1d09b22 - b65aa907aec, 
set_length_of_protocol_field
  INFO  [alembic.migration] Running upgrade b65aa907aec - 33c3db036fe4, 
set_length_of_description_field_metering
  INFO  [alembic.migration] Running upgrade 33c3db036fe4 - 4eca4a84f08a, 
Remove ML2 Cisco Credentials DB
  INFO  [alembic.migration] Running upgrade 4eca4a84f08a - d06e871c0d5, 
set_admin_state_up_not_null_ml2
  INFO  [alembic.migration] Running upgrade d06e871c0d5 - 6be312499f9, 
set_not_null_vlan_id_cisco
  INFO  [alembic.migration] Running upgrade 6be312499f9 - 1b837a7125a9, Cisco 
APIC Mechanism Driver
  INFO  [alembic.migration] Running upgrade 1b837a7125a9 - 10cd28e692e9, 
nuage_extraroute
  INFO  [alembic.migration] Running upgrade 10cd28e692e9 - 2db5203cb7a9, 
nuage_floatingip
  INFO  [alembic.migration] Running upgrade 2db5203cb7a9 - 5446f2a45467, 
set_server_default
  INFO  [alembic.migration] Running upgrade 5446f2a45467 - db_healing, 

[Yahoo-eng-team] [Bug 1385257] [NEW] Scary Cannot add or update a child row: a foreign key constraint fails ERROR

2014-10-24 Thread Attila Fazekas
Public bug reported:

I see a similar message in non-DVR setups, as mentioned in bug #1371696.

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQ2Fubm90IGFkZCBvciB1cGRhdGUgYSBjaGlsZCByb3dcXDogYSBmb3JlaWduIGtleSBjb25zdHJhaW50IGZhaWxzXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTQxNTQ0NTgzNDcsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=


2014-10-24 12:01:09.517 1409 TRACE neutron.agent.l3_agent RemoteError: Remote 
error: DBReferenceError (IntegrityError) (1452, 'Cannot add or update a child 
row: a foreign key constraint fails (`neutron`.`routerl3agentbindings`, 
CONSTRAINT `routerl3agentbindings_ibfk_2` FOREIGN KEY (`router_id`) REFERENCES 
`routers` (`id`) ON DELETE CASCADE)') 'INSERT INTO routerl3agentbindings 
(router_id, l3_agent_id) VALUES (%s, %s)' 
('63e69dd6-2964-42a2-ad67-9e7048c044e8', '24b9520c-0543-4968-9b2c-f4e86c5d26e4')

The ERROR is most likely harmless, but it is very annoying when searching for
a real issue.
Only a shorter message with a lower log level (DEBUG) should be logged on a
concurrent delete event.
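
A hedged sketch of that suggestion (illustrative only, not the neutron code):
catch the reference error around the binding insert and treat it as a
concurrent delete:

    import logging

    from oslo.db import exception as db_exc  # DBReferenceError is assumed to
                                              # live here for this sketch

    LOG = logging.getLogger(__name__)

    def bind_router_to_agent(session, binding_cls, router_id, agent_id):
        try:
            with session.begin(subtransactions=True):
                session.add(binding_cls(router_id=router_id,
                                        l3_agent_id=agent_id))
        except db_exc.DBReferenceError:
            LOG.debug('Router %s disappeared before it could be bound to L3 '
                      'agent %s; ignoring.', router_id, agent_id)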

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1385257

Title:
  Scary Cannot add or update a child row: a foreign key constraint
  fails ERROR

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I see a similar message in non-DVR setups, as mentioned in bug #1371696.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQ2Fubm90IGFkZCBvciB1cGRhdGUgYSBjaGlsZCByb3dcXDogYSBmb3JlaWduIGtleSBjb25zdHJhaW50IGZhaWxzXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTQxNTQ0NTgzNDcsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  
  2014-10-24 12:01:09.517 1409 TRACE neutron.agent.l3_agent RemoteError: Remote 
error: DBReferenceError (IntegrityError) (1452, 'Cannot add or update a child 
row: a foreign key constraint fails (`neutron`.`routerl3agentbindings`, 
CONSTRAINT `routerl3agentbindings_ibfk_2` FOREIGN KEY (`router_id`) REFERENCES 
`routers` (`id`) ON DELETE CASCADE)') 'INSERT INTO routerl3agentbindings 
(router_id, l3_agent_id) VALUES (%s, %s)' 
('63e69dd6-2964-42a2-ad67-9e7048c044e8', '24b9520c-0543-4968-9b2c-f4e86c5d26e4')

  The ERROR is most likely harmless, but it is very annoying when searching for
a real issue.
  Only a shorter message with a lower log level (DEBUG) should be logged on a
concurrent delete event.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1385257/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385256] [NEW] Distribution pie charts colors need to be re-arranged

2014-10-24 Thread Ana Krivokapić
Public bug reported:

Since the shades of blue used in distribution pie charts are arranged
from dark to light, it is difficult to distinguish between the different
shades in a chart (see attachment). The colors should be arranged
differently (light and dark shades should be next to each other).

** Affects: horizon
 Importance: Undecided
 Assignee: Ana Krivokapić (akrivoka)
 Status: New

** Attachment added: Screenshot from 2014-10-24 14:41:16.png
   
https://bugs.launchpad.net/bugs/1385256/+attachment/4243565/+files/Screenshot%20from%202014-10-24%2014%3A41%3A16.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1385256

Title:
  Distribution pie charts colors need to be re-arranged

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Since the shades of blue used in distribution pie charts are arranged
  from dark to light, it is difficult to distinguish between the
  different shades in a chart (see attachment). The colors should be
  arranged differently (light and dark shades should be next to each
  other).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1385256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385266] [NEW] Too many 'Enforcing rule' logged on the gate

2014-10-24 Thread Attila Fazekas
Public bug reported:

After bug #1356679 the debug logging became too verbose.
Logging each policy rule check is useful when you are editing policy.json, but
for general usage it is too verbose.

http://logs.openstack.org/53/130753/1/check/check-tempest-dsvm-neutron-
full/4ad9772/logs/screen-q-svc.txt.gz#_2014-10-24_11_44_41_049

$ wget 
http://logs.openstack.org/53/130753/1/check/check-tempest-dsvm-neutron-full/4ad9772/logs/screen-q-svc.txt.gz
$ wc -l screen-q-svc.txt.gz
94283 screen-q-svc.txt.gz
$ grep 'Enforcing rule' screen-q-svc.txt.gz | wc -l
25320

26.85% of the log lines contain 'Enforcing rule'.

These  'Enforcing rule' log messages should be disabled by default (even
with debug=True).

Note:
Maybe 'default_log_levels' can be used to make it less verbose by default.
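
For illustration, the same effect can be approximated at the logging layer; the
module name here is an assumption about where the messages originate:

    import logging

    # Keep DEBUG elsewhere but drop the per-check 'Enforcing rule' lines.
    logging.getLogger('neutron.policy').setLevel(logging.INFO)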

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1385266

Title:
  Too many 'Enforcing rule' logged on the gate

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  After bug #1356679 the debug logging became too verbose.
  Logging each policy rule check is useful when you are editing policy.json,
but for general usage it is too verbose.

  http://logs.openstack.org/53/130753/1/check/check-tempest-dsvm-
  neutron-full/4ad9772/logs/screen-q-svc.txt.gz#_2014-10-24_11_44_41_049

  $ wget 
http://logs.openstack.org/53/130753/1/check/check-tempest-dsvm-neutron-full/4ad9772/logs/screen-q-svc.txt.gz
  $ wc -l screen-q-svc.txt.gz
  94283 screen-q-svc.txt.gz
  $ grep 'Enforcing rule' screen-q-svc.txt.gz | wc -l
  25320

  26.85% of the log lines contain 'Enforcing rule'.

  These  'Enforcing rule' log messages should be disabled by default
  (even with debug=True).

  Note:
  Maybe 'default_log_levels' can be used to make it less verbose by default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1385266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340793] Re: DB2 deadlock error not detected

2014-10-24 Thread Victor Sergeyev
Glance is using oslo.db, so if it's fixed there then it's fixed in
glance as well.

** Changed in: glance
   Status: New => Invalid

** Changed in: heat
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340793

Title:
  DB2 deadlock error not detected

Status in Cinder:
  Confirmed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid
Status in Orchestration API (Heat):
  Invalid
Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in The Oslo library incubator:
  Fix Released

Bug description:
  Currently, only mysql and postgresql deadlock errors are properly handled.
  The error message for DB2 looks like:

  'SQL0911N  The current transaction has been rolled back because of a
  deadlock or timeout.  deadlock details'

  Oslo.db needs to include a regex to detect this deadlock. Essentially the
same as
  https://bugs.launchpad.net/nova/+bug/1270725
  but for DB2

  This is an example error:

  2014-07-01 19:52:16.574 2710 TRACE
  nova.openstack.common.db.sqlalchemy.session ProgrammingError:
  (ProgrammingError) ibm_db_dbi::ProgrammingError: Statement Execute
  Failed: [IBM][CLI Driver][DB2/LINUXX8664] SQL0911N  The current
  transaction has been rolled back because of a deadlock or timeout.
  Reason code 2.  SQLSTATE=40001 SQLCODE=-911 'UPDATE reservations SET
  updated_at=updated_at, deleted_at=?, deleted=id WHERE
  reservations.deleted = ? AND reservations.uuid IN (?, ?, ?)'
  (datetime.datetime(2014, 7, 1, 23, 52, 10, 774722), 0,
  'e2353f5e-f444-4a94-b7bf-f877402c15ab', 'c4b22c95-284a-4ce3-810b-
  5d9bbe6dd7b7', 'ab0294cb-c317-4594-9b19-911589228aa5')
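
  A hedged sketch of the requested detection, analogous to the existing mysql
  and postgresql deadlock regexes (the exact hook into oslo.db's exception
  filters is not shown here):

      import re

      _DB2_DEADLOCK_RE = re.compile(r'SQL0911N.*deadlock or timeout')

      def is_db2_deadlock(message):
          return bool(_DB2_DEADLOCK_RE.search(message))

      print(is_db2_deadlock(
          'SQL0911N  The current transaction has been rolled back because of '
          'a deadlock or timeout.  Reason code 2.  SQLSTATE=40001'))  # True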

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1340793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371701] Re: delete middleware module files

2014-10-24 Thread Cedric Brandily
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Cedric Brandily (cbrandily)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1371701

Title:
  delete middleware module files

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in The Oslo library incubator:
  Fix Committed

Bug description:
  Remove the files that are part of the oslo.middleware graduation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1371701/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381365] Re: SSL Version and cipher selection not possible

2014-10-24 Thread Jeremy Stanley
** Information type changed from Private Security to Public

** Tags added: security

** Changed in: ossa
   Status: Incomplete => Won't Fix

** CVE removed: http://www.cve.mitre.org/cgi-
bin/cvename.cgi?name=2014-3511

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381365

Title:
  SSL Version and cipher selection not possible

Status in Cinder:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Compute (Nova):
  New
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  We configure keystone to use SSL always. Due to the poodle issue, I was 
trying to configure keystone to disable SSLv3 completely. 
  
http://googleonlinesecurity.blogspot.fi/2014/10/this-poodle-bites-exploiting-ssl-30.html
  https://www.openssl.org/~bodo/ssl-poodle.pdf

  It seems that keystone has no support for configuring SSL versions or
  ciphers.

  If I'm not mistaken the relevant code is in the start function in
  common/environment/eventlet_server.py

  It calls 
  eventlet.wrap_ssl
  but with no SSL version or cipher options. Since the interface is identical,
I assume it uses ssl.wrap_socket. The default here seems to be  PROTOCOL_SSLv23 
(SSL2 disabled), which would make this vulnerable to the poodle issue.

  SSL configuration should probably be settable in the config file (with sane
  defaults), so that current and newly detected weak ciphers can be disabled
  without code changes.
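
  For what it's worth, eventlet.wrap_ssl() forwards keyword arguments to the
  underlying ssl.wrap_socket(), so a sketch of the requested behaviour could
  look like this (the values would come from new config options; this is not
  Keystone's actual code):

      import ssl

      import eventlet

      def wrap_server_socket(sock, certfile, keyfile):
          return eventlet.wrap_ssl(sock,
                                   certfile=certfile,
                                   keyfile=keyfile,
                                   server_side=True,
                                   ssl_version=ssl.PROTOCOL_TLSv1,  # no SSLv3
                                   ciphers='HIGH:!aNULL:!eNULL')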

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1381365/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379077] Re: Tenants can be created with invalid ids

2014-10-24 Thread Jeremy Stanley
** Information type changed from Private Security to Public

** Tags added: security

** Changed in: ossa
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1379077

Title:
  Tenants can be created with invalid ids

Status in OpenStack Identity (Keystone):
  In Progress
Status in Keystone icehouse series:
  In Progress
Status in Keystone juno series:
  Confirmed
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  When creating a new tenant, there is an optional argument 'id' that
  may be passed:

  
https://github.com/openstack/keystone/blob/9025b64a8f2bf5cf01a18453d6728e081bd2c3b9/keystone/assignment/controllers.py#L114

  If not passed, this just creates a uuid and proceeds.  If a value is
  passed, it will use that value.  So a user with privileges to create a
  tenant can pass something like ../../../../../ as the id.  If this
  is done, then the project can't be deleted without manually removing
  the value from the database. This can lead to a DoS that could fill
  the db and take down the cloud, in the worst of circumstances.

  I believe the proper fix here would be to just remove this feature
  altogether.  But this is because I'm not clear about why we would ever
  want to allow someone to set the id manually.  If there's a valid use
  case here, then we should at least do some input validation.
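
  If the feature is kept, a hedged sketch of the minimal validation being
  suggested (illustrative only, not Keystone code) would be to accept only ids
  that look like the generated ones:

      import re
      import uuid

      _VALID_ID = re.compile(r'^[0-9a-f]{32}$')

      def normalize_project_id(requested_id=None):
          # Generate an id when none is supplied; otherwise require a hex UUID.
          if requested_id is None:
              return uuid.uuid4().hex
          if not _VALID_ID.match(requested_id):
              raise ValueError('Invalid project id: %r' % requested_id)
          return requested_id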

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1379077/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests

2014-10-24 Thread Jeremy Stanley
** Information type changed from Private Security to Public

** Tags added: security

** Changed in: ossa
   Importance: High => Undecided

** Changed in: ossa
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361360

Title:
  Eventlet green threads not released back to the pool leading to
  choking of new requests

Status in Cinder:
  In Progress
Status in Cinder icehouse series:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Glance icehouse series:
  New
Status in OpenStack Identity (Keystone):
  In Progress
Status in Keystone icehouse series:
  New
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in neutron icehouse series:
  New
Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) icehouse series:
  New
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Currently reproduced on Juno milestone 2, but this issue should be
  reproducible in all releases since its inception.

  It is possible to choke OpenStack API controller services using
  wsgi+eventlet library by simply not closing the client socket
  connection. Whenever a request is received by any OpenStack API
  service for example nova api service, eventlet library creates a green
  thread from the pool and starts processing the request. Even after the
  response is sent to the caller, the green thread is not returned back
  to the pool until the client socket connection is closed. This way,
  any malicious user can send many API requests to the API controller
  node and determine the wsgi pool size configured for the given service
  and then send those many requests to the service and after receiving
  the response, wait there infinitely doing nothing leading to
  disrupting services for other tenants. Even when service providers
  have enabled the rate limiting feature, it is possible to choke the API
  services with a group (many tenants) attack.

  The following program illustrates choking of nova-api services (but this
  problem is omnipresent in all other OpenStack API Services using
  wsgi+eventlet)

  Note: I have explicitly set the wsgi_default_pool_size default value to 10 in
order to reproduce this problem in nova/wsgi.py.
  After you run the below program, you should try to invoke API
  

  import time
  import requests
  from multiprocessing import Process

  def request(number):
      # Port is important here
      path = 'http://127.0.0.1:8774/servers'
      try:
          response = requests.get(path)
          print "RESPONSE %s-%d" % (response.status_code, number)
          # during this sleep time, check if the client socket connection is
          # released or not on the API controller node.
          time.sleep(1000)
          print "Thread %d complete" % number
      except requests.exceptions.RequestException as ex:
          print "Exception occurred %d-%s" % (number, str(ex))

  if __name__ == '__main__':
      processes = []
      for number in range(40):
          p = Process(target=request, args=(number,))
          p.start()
          processes.append(p)
      for p in processes:
          p.join()

  


  Presently, the wsgi server allows persistent connections if you configure
keepalive to True, which is the default.
  In order to close the client socket connection explicitly after the response
is sent and read successfully by the client, you simply have to set keepalive
to False when you create a wsgi server.

  Additional information: By default eventlet passes “Connection: keepalive” if
keepalive is set to True when a response is sent to the client. But it doesn’t
have the capability to set the timeout and max parameters.
  For example.
  Keep-Alive: timeout=10, max=5

  Note: After we have disabled keepalive in all the OpenStack API
  service using wsgi library, then it might impact all existing
  applications built with the assumptions that OpenStack API services
  uses persistent connections. They might need to modify their
  applications if reconnection logic is not in place and also they might
  experience the performance has slowed down as it will need to
  reestablish the http connection for every request.
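
  A minimal sketch of the proposed change (eventlet's wsgi server exposes a
  keepalive flag; passing it from configuration is the part being requested
  here):

      import eventlet
      from eventlet import wsgi

      def run_api_server(app, host='0.0.0.0', port=8774):
          sock = eventlet.listen((host, port))
          # keepalive=False closes the client socket once the response has
          # been written, returning the green thread to the pool immediately.
          wsgi.server(sock, app, keepalive=False)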

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385309] [NEW] missing some translations related to Volume Type

2014-10-24 Thread Doug Fish
Public bug reported:

I'm using Juno/Stable

Several buttons related to Volume types appear untranslated in German,
even though they seem to be present in the po file.

I ran ./run_tests.sh --compilemessages

then changed language to German and navigated to
  Administrator -> Datenträger (Volumes) -> Datenträgertypen (Volume Types)

Note that in the upper table the Delete Volume Types button is not
translated.  In the lower table the title and button are not translated.
And on the QOS Create panel the title and description text are not
translated.

I have verified these work correctly in Spanish,  Japanese, and Chinese.
They seem to be translated in the German PO file, but appear
untranslated in use.

This makes no sense to me.  Does anyone else even see this?

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1385309

Title:
  missing some translations related to Volume Type

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I'm using Juno/Stable

  Several buttons related to Volume types appear untranslated in German,
  even though they seem to be present in the po file.

  I ran ./run_tests.sh --compilemessages

  then changed language to German and navigated to
  Administrator -> Datenträger (Volumes) -> Datenträgertypen (Volume Types)

  Note that in the upper table the Delete Volume Types button is not
  translated.  In the lower table the title and button are not
  translated.  And on the QOS Create panel the title and description
  text are not translated.

  I have verified these work correctly in Spanish,  Japanese, and
  Chinese.  They seem to be translated in the German PO file, but appear
  untranslated in use.

  This makes no sense to me.  Does anyone else even see this?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1385309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385318] [NEW] Nova fails to add fixed IP

2014-10-24 Thread Vitalii
Public bug reported:

I created an instance with one NIC attached.
Then I tried to attach another NIC:

nova add-fixed-ip  ServerId NetworkId

Nova compute raises an exception:

2014-10-24 15:40:33.925 31955 ERROR oslo.messaging.rpc.dispatcher 
[req-43570a05-937a-4ddf-a0e9-e05d42660817 ] Exception during message handling: 
Network could not be found for instance 09b6e137-37d6-475d-992c-bdcb7d3cb841.
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 134, in _dispatch_and_reply
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 177, in _dispatch
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 123, in _do_dispatch
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 414, in decorated_function
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/exception.py,
 line 88, in wrapped
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher payload)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py,
 line 82, in __exit__
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/exception.py,
 line 71, in wrapped
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 326, in decorated_function
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py,
 line 82, in __exit__
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 314, in decorated_function
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 3915, in add_fixed_ip_to_instance
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
network_id)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/network/base_api.py,
 line 61, in wrapper
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher res = 
f(self, context, *args, **kwargs)
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/network/neutronv2/api.py,
 line 684, in add_fixed_ip_to_instance
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
instance_id=instance['uuid'])
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
NetworkNotFoundForInstance: Network could not be found for instance 
09b6e137-37d6-475d-992c-bdcb7d3cb841.
2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 


In nova/network/neutronv2/api.py there is the line:

neutronv2.get_client(context).list_ports(**search_opts)

It cannot find the port by device_id, probably because nova does not create
the port?
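
A quick way to check is to repeat the lookup with python-neutronclient. In this
sketch the credentials and network id are placeholders; the device_id is the
instance uuid from the traceback above:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0')
    ports = neutron.list_ports(
        device_id='09b6e137-37d6-475d-992c-bdcb7d3cb841',
        network_id='<NetworkId>')['ports']
    # An empty list here matches the NetworkNotFoundForInstance raised by nova.
    print(ports)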

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.

[Yahoo-eng-team] [Bug 1385355] [NEW] Migrate Neutron to oslo.utils

2014-10-24 Thread Ihar Hrachyshka
Public bug reported:

We need to migrate to oslo.utils in Kilo.

** Affects: neutron
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1385355

Title:
  Migrate Neutron to oslo.utils

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We need to migrate to oslo.utils in Kilo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1385355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385356] [NEW] Migrate Neutron to oslo.i18n

2014-10-24 Thread Ihar Hrachyshka
Public bug reported:

We need to migrate to oslo.i18n in Kilo.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1385356

Title:
  Migrate Neutron to oslo.i18n

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We need to migrate to oslo.i18n in Kilo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1385356/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385353] [NEW] Migrate Neutron to oslo.serialization

2014-10-24 Thread Ihar Hrachyshka
Public bug reported:

We need to migrate to oslo.serialization in Kilo.

** Affects: neutron
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1385353

Title:
  Migrate Neutron to oslo.serialization

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We need to migrate to oslo.serialization in Kilo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1385353/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385354] [NEW] Migrate Neutron to oslo.utils

2014-10-24 Thread Ihar Hrachyshka
Public bug reported:

We need to migrate to oslo.utils in Kilo.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1385354

Title:
  Migrate Neutron to oslo.utils

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We need to migrate to oslo.utils in Kilo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1385354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385357] [NEW] Migrate Neutron to oslo.i18n

2014-10-24 Thread Ihar Hrachyshka
Public bug reported:

We need to migrate to oslo.i18n in Kilo.

** Affects: neutron
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1385357

Title:
  Migrate Neutron to oslo.i18n

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We need to migrate to oslo.i18n in Kilo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1385357/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385379] [NEW] Change i18n hint from _ to _LI in neutron code

2014-10-24 Thread Manish Godara
Public bug reported:

Change all of the neutron code to use _LI as the i18n hint for log info messages.
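
For clarity, a hedged example of the convention (the import location is assumed
to be neutron.i18n, as used for the oslo.i18n migration):

    import logging

    from neutron.i18n import _, _LI

    LOG = logging.getLogger(__name__)

    def report_agent(agent_id):
        LOG.info(_LI('Agent %s is alive'), agent_id)   # log messages: _LI
        return _('Agent %s is alive') % agent_id       # user-facing text: _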

** Affects: neutron
 Importance: Undecided
 Assignee: Manish Godara (manishatyhoo)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Manish Godara (manishatyhoo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1385379

Title:
  Change i18n hint from _ to _LI in neutron code

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Change all of the neutron code to use _LI as the i18n hint for log info
  messages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1385379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367941] Re: Able to aquire the semaphore used in lockutils.synchronized_with_prefix twice at the same time

2014-10-24 Thread Doug Hellmann
** Changed in: oslo.concurrency
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367941

Title:
  Able to aquire the semaphore used in
  lockutils.synchronized_with_prefix twice at the same time

Status in OpenStack Compute (Nova):
  Invalid
Status in The Oslo library incubator:
  Fix Released
Status in Oslo Concurrency Library:
  Fix Released

Bug description:
  In nova-compute the semaphore compute_resources is used  in
  lockutils.synchronized_with_prefix('nova-') as part of
  nova/compute/resource_tracker.py

  The compute_resources  semaphore is acquired once at:

  http://logs.openstack.org/58/117258/2/gate/gate-tempest-dsvm-neutron-
  full/48c8627/logs/screen-n-cpu.txt.gz?#_2014-09-10_20_19_17_176

  And then again at:

  In  http://logs.openstack.org/58/117258/2/gate/gate-tempest-dsvm-
  neutron-
  full/48c8627/logs/screen-n-cpu.txt.gz?#_2014-09-10_20_19_52_234

  without being released in between.  This means
  lockutils.synchronized_with_prefix('nova-') isn't working as expected.

  While https://review.openstack.org/#/c/119586/ is a possible culprit
  for this issue, a spot check of nova-compute logs from before that
  patch was merged show this was happening before (although in my spot
  checking it happened significantly less often, but I only checked one
  file).
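
  For reference, a minimal sketch of how such a prefixed external lock is
  typically taken (the decorated function name is illustrative; the decorator
  factory is the one named above):

      # Illustrative only: taking a 'nova-' prefixed lock with lockutils.
      from nova.openstack.common import lockutils

      synchronized = lockutils.synchronized_with_prefix('nova-')

      @synchronized('compute_resources')
      def _update_available_resource():
          # With a correctly working semaphore, only one thread per process
          # should ever be inside this function at a time.
          pass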

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385405] [NEW] Domain backed by a populated read-only domain-specific LDAP identity backend cannot be deleted

2014-10-24 Thread Gabriel Assis Bezerra
Public bug reported:

I've set up a DevStack with Keystone using domain-specific backends.

I've then created a Domain-A with its domain-specific configuration
being:

[ldap]
url=ldap://ldap.server.com:389
user=cn=admin,dc=example,dc=com
password=secret
suffix=dc=example,dc=com

user_tree_dn=ou=Users,dc=example,dc=com 
user_id_attribute=cn
user_name_attribute=cn
user_objectclass=organizationalPerson
user_allow_create=false
user_allow_update=false
user_allow_delete=false

group_tree_dn=ou=Groups,dc=example,dc=com
group_id_attribute=cn
group_name_attribute=cn
group_objectclass=*
group_allow_create=false
group_allow_update=false
group_allow_delete=false

[identity]
driver = keystone.identity.backends.ldap.Identity


Now I cannot delete this domain. When I try that, Keystone returns this error:
{"error": {"message": "You are not authorized to perform the requested action:
LDAP group delete (Disable debug mode to suppress these details.)", "code":
403, "title": "Forbidden"}}


As I configured it not to allow the information to be deleted by Keystone, I'd 
expect it to ignore the fact that it cannot delete the groups and users and 
then delete the domain.

On the other hand, it is good to have it blocked until the not-needed-
anymore configuration file is removed.


See also the chat below on 2014-10-22 on #openstack-keystone:

14:53:45 gabriel-bezerra | ayoung: I cannot delete a domain that is backed by a 
populated read-only LDAP database. It is a bug, right? (just asking before 
filing)
14:56:11  ayoung | gabriel-bezerra, multi-backend?
14:56:52 gabriel-bezerra | ayoung: yes, domain-specific
14:57:37  ayoung | gabriel-bezerra, what error do you get?  I'm not 
certain its a bug or not.  Suspect a foreign key constraint
14:57:50  ayoung | but you need to disable a domain before deleting no 
matter what
14:58:15 gabriel-bezerra | ayoung: {error: {message: You are not 
authorized to perform the requested action: LDAP group delete (Disable debug 
mode to suppress these details.), code: 403, title: Forbidden}
14:58:39  ayoung | gabriel-bezerra, cuz deleting the domain trys to 
delete all of the objects inside it
14:58:48 gabriel-bezerra | ayoung: it is being disabled
14:59:00  ayoung | You'd have to unmap the domain specific backend part 
first 
14:59:30  ayoung | so remove the file, restart the server,and I bet it 
works...and I think that is as it should be under current ways of thinking
15:00:07 gabriel-bezerra | ayoung: ok. no bug then. thank you.
15:00:21  ayoung | yeah...but maybe something to document
15:00:59  ayoung | gabriel-bezerra, until we make the configuration 
something that can be done on the fly and without restarting the server, I'd 
say it works as designed
15:07:41 gabriel-bezerra | ayoung: I'll file the bug then, just to keep track 
of the issue.
15:07:50  ayoung | ++

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1385405

Title:
  Domain backed by a populated read-only domain-specific LDAP identity
  backend cannot be deleted

Status in OpenStack Identity (Keystone):
  New

Bug description:
  I've set up a DevStack with Keystone using domain-specific backends.

  I've then created a Domain-A with its domain-specific configuration
  being:

  [ldap]
  url=ldap://ldap.server.com:389
  user=cn=admin,dc=example,dc=com
  password=secret
  suffix=dc=example,dc=com

  user_tree_dn=ou=Users,dc=example,dc=com 
  user_id_attribute=cn
  user_name_attribute=cn
  user_objectclass=organizationalPerson
  user_allow_create=false
  user_allow_update=false
  user_allow_delete=false

  group_tree_dn=ou=Groups,dc=example,dc=com
  group_id_attribute=cn
  group_name_attribute=cn
  group_objectclass=*
  group_allow_create=false
  group_allow_update=false
  group_allow_delete=false

  [identity]
  driver = keystone.identity.backends.ldap.Identity

  
  Now I cannot delete this domain. When I try that, Keystone returns this error:
  {"error": {"message": "You are not authorized to perform the requested
action: LDAP group delete (Disable debug mode to suppress these details.)",
"code": 403, "title": "Forbidden"}}

  
  As I configured it not to allow the information to be deleted by Keystone, 
I'd expect it to ignore the fact that it cannot delete the groups and users and 
then delete the domain.

  On the other hand, it is good to have it blocked until the not-needed-
  anymore configuration file is removed.

  
  See also the chat below on 2014-10-22 on #openstack-keystone:

  14:53:45 gabriel-bezerra | ayoung: I cannot delete a domain that is backed by 
a populated read-only LDAP database. It is a bug, right? (just asking before 
filing)
  14:56:11  ayoung | gabriel-bezerra, multi-backend?
  14:56:52 gabriel-bezerra | ayoung: yes, domain-specific
  14:57:37  ayoung | 

[Yahoo-eng-team] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests

2014-10-24 Thread Morgan Fainberg
It is recommended that Keystone be deployed under mod_wsgi rather than
eventlet; however, this still affects Keystone's eventlet deployment model.

** Changed in: keystone
   Importance: Undecided => Medium

** Also affects: keystone/juno
   Importance: Undecided
   Status: New

** Changed in: keystone/icehouse
   Importance: Undecided => Medium

** Changed in: keystone/icehouse
   Status: New => Confirmed

** Changed in: keystone/juno
   Importance: Undecided => Medium

** Changed in: keystone/juno
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361360

Title:
  Eventlet green threads not released back to the pool leading to
  choking of new requests

Status in Cinder:
  In Progress
Status in Cinder icehouse series:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Glance icehouse series:
  New
Status in OpenStack Identity (Keystone):
  In Progress
Status in Keystone icehouse series:
  Confirmed
Status in Keystone juno series:
  Confirmed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in neutron icehouse series:
  New
Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) icehouse series:
  New
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Currently reproduced on Juno milestone 2, but this issue should be
  reproducible in all releases since its inception.

  It is possible to choke OpenStack API controller services using
  wsgi+eventlet library by simply not closing the client socket
  connection. Whenever a request is received by any OpenStack API
  service for example nova api service, eventlet library creates a green
  thread from the pool and starts processing the request. Even after the
  response is sent to the caller, the green thread is not returned back
  to the pool until the client socket connection is closed. This way,
  any malicious user can send many API requests to the API controller
  node and determine the wsgi pool size configured for the given service
  and then send those many requests to the service and after receiving
  the response, wait there infinitely doing nothing leading to
  disrupting services for other tenants. Even when service providers
  have enabled rate limiting feature, it is possible to choke the API
  services with a group (many tenants) attack.

  Following program illustrates choking of nova-api services (but this
  problem is omnipresent in all other OpenStack API Services using
  wsgi+eventlet)

  Note: I have explicitly set the wsgi_default_pool_size default value to 10 in
order to reproduce this problem in nova/wsgi.py.
  After you run the program below, you should try to invoke the API.
  

  import time
  import requests
  from multiprocessing import Process

  def request(number):
      # Port is important here
      path = 'http://127.0.0.1:8774/servers'
      try:
          response = requests.get(path)
          print "RESPONSE %s-%d" % (response.status_code, number)
          # during this sleep time, check if the client socket connection is
          # released or not on the API controller node.
          time.sleep(1000)
          print "Thread %d complete" % number
      except requests.exceptions.RequestException as ex:
          print "Exception occurred %d-%s" % (number, str(ex))

  if __name__ == '__main__':
      processes = []
      for number in range(40):
          p = Process(target=request, args=(number,))
          p.start()
          processes.append(p)
      for p in processes:
          p.join()

  


  Presently, the wsgi server allows persistent connections if you configure
keepalive to True, which is the default.
  In order to close the client socket connection explicitly after the response
is sent and read successfully by the client, you simply have to set keepalive
to False when you create a wsgi server.
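
  As a hedged illustration of that mitigation (the socket address and the
  trivial WSGI app are assumptions, not OpenStack code), eventlet's wsgi
  server accepts a keepalive flag when it is created:

      # Sketch only: disabling HTTP keep-alive on an eventlet wsgi server.
      import eventlet
      import eventlet.wsgi

      def app(environ, start_response):
          # Trivial WSGI app standing in for an OpenStack API service.
          start_response('200 OK', [('Content-Type', 'text/plain')])
          return [b'ok']

      sock = eventlet.listen(('127.0.0.1', 8774))
      # keepalive=False makes the server close the client socket after each
      # response, so the green thread goes back to the pool immediately.
      eventlet.wsgi.server(sock, app, keepalive=False)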

  Additional information: By default eventlet passes "Connection: keepalive" if
keepalive is set to True when a response is sent to the client. But it doesn't
have the capability to set the timeout and max parameters.
  For example:
  Keep-Alive: timeout=10, max=5

  Note: After we disable keepalive in all the OpenStack API services using the
wsgi library, it might impact existing applications built with the assumption
that OpenStack API services use persistent connections. They might need to
modify their applications if reconnection logic is not in place, and they
might also experience slower performance, since the HTTP connection will need
to be reestablished for every request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

-- 

[Yahoo-eng-team] [Bug 1385451] [NEW] Interface does not exist error during delete instance

2014-10-24 Thread Ketan Nilangekar
Public bug reported:

I am seeing the following error during delete of an instance in
compute.log:

014-10-18 23:27:46.509 30501 ERROR nova.virt.libvirt.driver [-] [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] During wait destroy, instance disappeared.
2014-10-18 23:27:46.628 30501 ERROR nova.virt.libvirt.vif 
[req-6aa96e90-ea3b-4bf8-a546-c1dac69d05e9 5d559800e58849179f414112d7d2d026 
d26c6927ca2249209c6736fe18e16b68] [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] Failed while unplugging vif
2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] Traceback (most recent call last):
2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c]   File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/vif.py, line 602, in 
unplug_ovs_hybrid
2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] utils.execute('brctl', 'delif', 
br_name, v1_name, run_as_root=True)
2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c]   File 
/usr/lib/python2.6/site-packages/nova/utils.py, line 178, in execute
2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] return processutils.execute(*cmd, 
**kwargs)
2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c]   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py, line 
180, in execute
2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] cmd=sanitized_cmd)
2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] ProcessExecutionError: Unexpected error 
while running command.
2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] Command: sudo nova-rootwrap 
/etc/nova/rootwrap.conf brctl delif qbr15cdd549-68 qvb15cdd549-68
2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] Exit code: 1
2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] Stdout: u''
2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] Stderr: u'interface qvb15cdd549-68 does 
not exist!\n'
2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c]

However I see that the veth pair interfaces qv* and qbr* are getting
deleted as expected. I don't see any artifacts from the instance
deletion left over despite this message.
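
One possible hardening, sketched under the assumption of the utils/processutils
APIs shown in the traceback (this is not a merged fix, and the helper name is
made up), would be to treat a missing interface as already removed during
unplug:

    # Hypothetical sketch: ignore "does not exist" when removing the veth
    # from the bridge, since unplug only needs the interface to be gone.
    from nova.openstack.common import processutils
    from nova import utils

    def _delete_bridge_port(br_name, v1_name):
        try:
            utils.execute('brctl', 'delif', br_name, v1_name,
                          run_as_root=True)
        except processutils.ProcessExecutionError as exc:
            if 'does not exist' not in str(exc):
                raise
            # The interface is already gone; nothing left to clean up.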

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385451

Title:
  Interface does not exist error during delete instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  I am seeing the following error during delete of an instance in
  compute.log:

  014-10-18 23:27:46.509 30501 ERROR nova.virt.libvirt.driver [-] [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] During wait destroy, instance disappeared.
  2014-10-18 23:27:46.628 30501 ERROR nova.virt.libvirt.vif 
[req-6aa96e90-ea3b-4bf8-a546-c1dac69d05e9 5d559800e58849179f414112d7d2d026 
d26c6927ca2249209c6736fe18e16b68] [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] Failed while unplugging vif
  2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] Traceback (most recent call last):
  2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c]   File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/vif.py, line 602, in 
unplug_ovs_hybrid
  2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] utils.execute('brctl', 'delif', 
br_name, v1_name, run_as_root=True)
  2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c]   File 
/usr/lib/python2.6/site-packages/nova/utils.py, line 178, in execute
  2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] return processutils.execute(*cmd, 
**kwargs)
  2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c]   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py, line 
180, in execute
  2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 
91c742e3-8327-4b65-b96a-44dafccbbc1c] cmd=sanitized_cmd)
  2014-10-18 23:27:46.628 30501 TRACE nova.virt.libvirt.vif [instance: 

[Yahoo-eng-team] [Bug 1385468] [NEW] Cells assumes 1:1 compute-service:compute-node mapping

2014-10-24 Thread Jim Rollenhagen
Public bug reported:

Cells capacity calculation seems to assume one compute_node per nova-
compute service. It calculates capacity data by service name,
overwriting the value for each compute_node. This results in the cell
only showing capacity for one compute_host for each nova-compute service
in the cell.

Observed running Ironic, where there are many compute_hosts in a nova-
compute service.
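
To illustrate the shape of the problem (the dictionaries and field names below
are simplified assumptions, not the actual cells state code): keying capacity
by service name collapses multiple compute nodes into one entry, whereas
keying by (host, hypervisor_hostname) keeps them distinct.

    # Simplified illustration of the aggregation bug; not the real cells code.
    compute_nodes = [
        {'service_host': 'compute-1', 'hypervisor_hostname': 'ironic-node-a',
         'free_ram_mb': 4096},
        {'service_host': 'compute-1', 'hypervisor_hostname': 'ironic-node-b',
         'free_ram_mb': 8192},
    ]

    # Buggy shape: one entry per nova-compute service, so later nodes
    # overwrite earlier ones and only 8192 MB is reported.
    by_service = {}
    for node in compute_nodes:
        by_service[node['service_host']] = node['free_ram_mb']

    # Expected shape: one entry per compute node, so both capacities survive.
    by_node = {}
    for node in compute_nodes:
        key = (node['service_host'], node['hypervisor_hostname'])
        by_node[key] = node['free_ram_mb']

    assert sum(by_service.values()) == 8192
    assert sum(by_node.values()) == 12288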

** Affects: nova
 Importance: Undecided
 Assignee: Jim Rollenhagen (jim-rollenhagen)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385468

Title:
  Cells assumes 1:1 compute-service:compute-node mapping

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Cells capacity calculation seems to assume one compute_node per nova-
  compute service. It calculates capacity data by service name,
  overwriting the value for each compute_node. This results in the cell
  only showing capacity for one compute_host for each nova-compute
  service in the cell.

  Observed running Ironic, where there are many compute_hosts in a nova-
  compute service.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1385468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385480] [NEW] LVM rescue disk not removed during unrescue

2014-10-24 Thread Dan Genin
Public bug reported:

Rescuing and unrescuing an LVM-backed instance leaves behind the .rescue
disk image. This is caused by unrescue assuming that instances have
file-based disks.

def unrescue(self, instance, network_info):
    """Reboot the VM which is being rescued back into primary images.
    """
    instance_dir = libvirt_utils.get_instance_path(instance)
    unrescue_xml_path = os.path.join(instance_dir, 'unrescue.xml')
    xml = libvirt_utils.load_file(unrescue_xml_path)
    virt_dom = self._lookup_by_name(instance.name)
    self._destroy(instance)
    self._create_domain(xml, virt_dom)
    libvirt_utils.file_delete(unrescue_xml_path)
    rescue_files = os.path.join(instance_dir, "*.rescue")
    for rescue_file in glob.iglob(rescue_files):
        libvirt_utils.file_delete(rescue_file)    # <-- here

The last line deletes all of the .rescue files in the instance directory
but does not clean up the .rescue LVM volumes.
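
A hedged sketch of the missing cleanup, assuming the nova.virt.libvirt.lvm
helpers and the <uuid>_disk.rescue volume naming of this era (the helper
function and its wiring are hypothetical, not the merged fix):

    # Hypothetical sketch only: remove the instance's *.rescue LVM volumes too.
    # Assumes list_volumes()/remove_volumes() helpers exist in this module.
    import os

    from nova.virt.libvirt import lvm

    def _cleanup_rescue_volumes(instance, volume_group):
        rescue_disks = [
            os.path.join('/dev', volume_group, name)
            for name in lvm.list_volumes(volume_group)
            if name.startswith(instance.uuid) and name.endswith('.rescue')
        ]
        if rescue_disks:
            lvm.remove_volumes(rescue_disks)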

--
To reproduce:

1. Configure nova for LVM ephemeral storage with

[libvirt]
images_type = lvm
images_volume_group = nova-lvm

2. Stack
3. Boot an instance with flavor other than nano or micro, so the instance has a 
non-zero disk size
4. Rescue the instance
5. Unrescue the instance
6. Observe the rescue image left in nova-lvm/<instance uuid>.rescue

** Affects: nova
 Importance: Undecided
 Assignee: Dan Genin (daniel-genin)
 Status: New


** Tags: leak lvm unrescue

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385480

Title:
  LVM rescue disk not removed during unrescue

Status in OpenStack Compute (Nova):
  New

Bug description:
  Rescuing and unrescuing an LVM-backed instance leaves behind the .rescue
  disk image. This is caused by unrescue assuming that instances have
  file-based disks.

  def unrescue(self, instance, network_info):
      """Reboot the VM which is being rescued back into primary images.
      """
      instance_dir = libvirt_utils.get_instance_path(instance)
      unrescue_xml_path = os.path.join(instance_dir, 'unrescue.xml')
      xml = libvirt_utils.load_file(unrescue_xml_path)
      virt_dom = self._lookup_by_name(instance.name)
      self._destroy(instance)
      self._create_domain(xml, virt_dom)
      libvirt_utils.file_delete(unrescue_xml_path)
      rescue_files = os.path.join(instance_dir, "*.rescue")
      for rescue_file in glob.iglob(rescue_files):
          libvirt_utils.file_delete(rescue_file)    # <-- here

  The last line deletes all of the .rescue files in the instance
  directory but does not clean up the .rescue LVM volumes.

  --
  To reproduce:

  1. Configure nova for LVM ephemeral storage with

  [libvirt]
  images_type = lvm
  images_volume_group = nova-lvm

  2. Stack
  3. Boot an instance with flavor other than nano or micro, so the instance has 
a non-zero disk size
  4. Rescue the instance
  5. Unrescue the instance
  6. Observe the rescue image left in nova-lvm/<instance uuid>.rescue

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1385480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377962] Re: Improperly placed 'only'

2014-10-24 Thread Doug Fish
I'm not seeing enough of a problem here to justify triggering
retranslations of these segments. Just something to add to the code
review watch list I guess.

** Changed in: horizon
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1377962

Title:
  Improperly placed 'only'

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  In several instances we've placed the word 'only' before the verb.
  These fragments could be more clearly written by moving 'only' closer
  to what it is modifying.

  'Name may only contain letters, numbers, underscores, periods and hyphens.'
  'Key pair name may only contain letters, numbers, underscores, spaces and...'
  'The string may only contain ASCII characters and numbers.'
  'The private key will be only used in your browser and will not be sent to...'
  'The requested instance cannot be launched as you only have %(avail)i of...'
  'Launching multiple instances is only supported for images and instance ...'
  'Rules only affect one direction of traffic. The opposite...'
  'Name must start with a letter and may only contain letters, numbers, ...';
  'The \"Migration Policy\" is only used if the volume retype cannot...'
  'A volume of %(req)iGB cannot be created as you only have %(avail)iGB of ...'
  'Volume cannot be extended to %(req)iGB as you only have %(avail)iGB of ...'

  It's a minor issue; the meaning is already clear.  I'd like a 2nd
  opinion before changing this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1377962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385484] [NEW] Failed to destroy evacuated disks on init_host

2014-10-24 Thread Fei Long Wang
Public bug reported:

After an instance is evacuated successfully and the failed host is restarted
to bring it back, the user will run into the error below.


179Sep 23 01:48:35 node-1 nova-compute 2014-09-23 01:48:35.346 13206 ERROR 
nova.openstack.common.threadgroup [-] error removing image
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
117, in wait
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
x.wait()
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
49, in wait
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 168, in wait
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/event.py, line 116, in wait
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in switch
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in main
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py, line 483, 
in run_service
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
service.start()
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 163, in start
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1018, in 
init_host
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
self._destroy_evacuated_instances(context)
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 712, in 
_destroy_evacuated_instances
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup bdi, 
destroy_disks)
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 962, in 
destroy
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
destroy_disks, migrate_data)
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 1080, in 
cleanup
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
self._cleanup_rbd(instance)
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 1090, in 
_cleanup_rbd
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
LibvirtDriver._get_rbd_driver().cleanup_volumes(instance)
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/rbd_utils.py, line 238, in 
cleanup_volumes
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
self.rbd.RBD().remove(client.ioctx, volume)
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/rbd.py, line 300, in remove
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup raise 
make_ex(ret, 'error removing image')
2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
ImageBusy: error removing image

** Affects: nova
 Importance: High
 Assignee: Fei Long Wang (flwang)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Fei Long Wang (flwang)

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385484

Title:
  Failed to destroy evacuated disks on init_host

Status in OpenStack Compute (Nova):
  New

Bug 

[Yahoo-eng-team] [Bug 1385485] [NEW] "Other" in image metadata dialog is hardcoded to English

2014-10-24 Thread Doug Fish
Public bug reported:

Admin->Images->Update Metadata.

Note that "Other" is not translatable.
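
For illustration, a hedged sketch of the usual Django pattern for making such
a literal translatable (where exactly the string lives in Horizon is not shown
here, so the constant name is an assumption):

    # Illustrative only: mark the literal for translation instead of
    # hardcoding the English text.
    from django.utils.translation import ugettext_lazy as _

    OTHER_CHOICE = ('other', _("Other"))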

** Affects: horizon
 Importance: Undecided
 Assignee: Doug Fish (drfish)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Doug Fish (drfish)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1385485

Title:
  "Other" in image metadata dialog is hardcoded to English

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Admin->Images->Update Metadata.

  Note that "Other" is not translatable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1385485/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385489] [NEW] ResourceTracker._update_usage_from_migration() is inefficient due to multiple Instance.get_by_uuid() lookups

2014-10-24 Thread Jay Pipes
Public bug reported:

Here is our ResourceTracker._update_usage_from_migration() code:

def _update_usage_from_migrations(self, context, resources,
migrations):

self.tracked_migrations.clear()

filtered = {}

# do some defensive filtering against bad migrations records in the
# database:
for migration in migrations:

instance = migration['instance']

if not instance:
# migration referencing deleted instance
continue

uuid = instance['uuid']

# skip migration if instance isn't in a resize state:
if not self._instance_in_resize_state(instance):
                LOG.warn(_("Instance not resizing, skipping migration."),
 instance_uuid=uuid)
continue

# filter to most recently updated migration for each instance:
m = filtered.get(uuid, None)
            if not m or migration['updated_at'] >= m['updated_at']:
filtered[uuid] = migration

for migration in filtered.values():
instance = migration['instance']
try:
self._update_usage_from_migration(context, instance, None,
  resources, migration)
except exception.FlavorNotFound:
                LOG.warn(_("Flavor could not be found, skipping "
                           "migration."), instance_uuid=uuid)
continue

Unfortunately, when the migration object's 'instance' attribute is
accessed, a call across RPC and DB occurs:

https://github.com/openstack/nova/blob/stable/icehouse/nova/objects/migration.py#L77-L80

@property
def instance(self):
return instance_obj.Instance.get_by_uuid(self._context,
 self.instance_uuid)

For some very strange reason, the code in _update_usage_from_migrations()
builds a filtered dictionary with the migration objects that need to be
accounted for in the resource usages, and then once it builds that
filtered dictionary, it goes through the values and calls
_update_usage_from_migration(), passing the migration object's instance
object.

There's no reason to do this at all. The filtered variable can go away
and the call to _update_usage_from_migration() can occur in the main for
loop, using the same instance variable from the original line:

 instance = migration['instance']

That way, for each migration, we don't need to do two lookup by UUID
calls through the conductor to get the migration's instance object...
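
A hedged sketch of one way to avoid the second lookup while still keeping the
most-recently-updated filter (this mirrors the method body quoted above and is
illustrative, not the merged patch): carry the instance object fetched in the
first pass alongside its migration so the second pass never touches the lazy
'instance' property.

    filtered = {}
    for migration in migrations:
        instance = migration['instance']      # single RPC/DB lookup per row
        if not instance:
            continue
        uuid = instance['uuid']
        if not self._instance_in_resize_state(instance):
            continue
        entry = filtered.get(uuid)
        if not entry or migration['updated_at'] >= entry[0]['updated_at']:
            filtered[uuid] = (migration, instance)

    for migration, instance in filtered.values():
        self._update_usage_from_migration(context, instance, None,
                                          resources, migration)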

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385489

Title:
  ResourceTracker._update_usage_from_migration() is inefficient due to
  multiple Instance.get_by_uuid() lookups

Status in OpenStack Compute (Nova):
  New

Bug description:
  Here is our ResourceTracker._update_usage_from_migration() code:

  def _update_usage_from_migrations(self, context, resources,
  migrations):

  self.tracked_migrations.clear()

  filtered = {}

  # do some defensive filtering against bad migrations records in the
  # database:
  for migration in migrations:

  instance = migration['instance']

  if not instance:
  # migration referencing deleted instance
  continue

  uuid = instance['uuid']

  # skip migration if instance isn't in a resize state:
  if not self._instance_in_resize_state(instance):
                LOG.warn(_("Instance not resizing, skipping migration."),
   instance_uuid=uuid)
  continue

  # filter to most recently updated migration for each instance:
  m = filtered.get(uuid, None)
            if not m or migration['updated_at'] >= m['updated_at']:
  filtered[uuid] = migration

  for migration in filtered.values():
  instance = migration['instance']
  try:
  self._update_usage_from_migration(context, instance, None,
resources, migration)
  except exception.FlavorNotFound:
                LOG.warn(_("Flavor could not be found, skipping "
                           "migration."), instance_uuid=uuid)
  continue

  Unfortunately, when the migration object's 'instance' attribute is
  accessed, a call across RPC and DB occurs:

  
https://github.com/openstack/nova/blob/stable/icehouse/nova/objects/migration.py#L77-L80

  @property
  def instance(self):
  return instance_obj.Instance.get_by_uuid(self._context,
   self.instance_uuid)

[Yahoo-eng-team] [Bug 1385533] [NEW] Tokens issued from a saml2 auth ignore inheritance of group roles

2014-10-24 Thread Henry Nash
Public bug reported:

When building the roles in a Keystone  token from a saml2 token, we call
assignment_api.get_roles_for_groups() to add in any group roles.  This
appears to ignore the inheritance flag on the assignment - and puts in
all group roles whether inherited or not.  This means the wrong roles
can end up in the resulting Keystone token.

** Affects: keystone
 Importance: High
 Status: New

** Changed in: keystone
   Importance: Undecided => High

** Description changed:

  When building the roles in a Keystone  token from a saml2 token, we call
  assignment_api.get_roles_for_groups() to add in any group roles.  This
  appears to ignore the inheritance flag on the assignment - and puts in
- all roles whether inherited or not.  This means the wrong roles can end
- up in the resulting Keystone token
+ all group roles whether inherited or not.  This means the wrong roles
+ can end up in the resulting Keystone token.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1385533

Title:
  Tokens issued from a saml2 auth ignore inheritance of group roles

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When building the roles in a Keystone  token from a saml2 token, we
  call assignment_api.get_roles_for_groups() to add in any group roles.
  This appears to ignore the inheritance flag on the assignment - and
  puts in all group roles whether inherited or not.  This means the
  wrong roles can end up in the resulting Keystone token.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1385533/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384580] Re: Rpc and api green thread in neutron server fail to restart worker

2014-10-24 Thread Xurong Yang
** Project changed: neutron => oslo-incubator

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Xurong Yang (idopra)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384580

Title:
  Rpc and api green thread in neutron server fail to restart worker

Status in OpenStack Neutron (virtual network service):
  New
Status in The Oslo library incubator:
  New

Bug description:
  When neutorn server starts, it will spawn two green threads for rpc
  and api. If multi rpc workers and api workers are configured, each
  green thread will fork several child processes to handle the requests,
  then os.waitpid(0, os.WNOHANG) is called so if one child process
  exits, it can restart the child process to guarantee the number of
  running workers.

  Here comes the problem, both rpc and api green thread will maintain a
  list of pid of the child processes they fork, only when the pid return
  from os.waitpid is in the list, the child process will be restarted.
  But since rpc and api green thread are in the same parent process,
  there is one scenario that rpc green thread get a pid from os.waitpid
  which is forked by api green thread, of course this pid is not in the
  pid list of rpc green thread, so it will not restart the child process
  and there is no chance the child process can be restarted.
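
  A compressed, self-contained illustration of the race described above (the
  fork counts and sleeps are arbitrary): two launchers in the same parent each
  fork children and call os.waitpid(0, ...), so either launcher can reap a
  child that the other one forked and then never respawn it.

      # Illustration only: os.waitpid(0, ...) reaps *any* child of the
      # process, not just those a particular launcher forked.
      import os
      import time

      def fork_child():
          pid = os.fork()
          if pid == 0:            # child: exit quickly to trigger reaping
              time.sleep(0.1)
              os._exit(0)
          return pid

      rpc_children = {fork_child()}   # stand-in for the rpc launcher's workers
      api_children = {fork_child()}   # stand-in for the api launcher's workers

      time.sleep(0.5)

      # The "rpc launcher" waits for any child; it may reap the api worker.
      pid, _status = os.waitpid(0, os.WNOHANG)
      if pid not in rpc_children and pid in api_children:
          print('rpc launcher reaped pid %d, which the api launcher forked; '
                'that worker will not be respawned' % pid)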

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1384580/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384347] Re: Couldn't run instance with existing port when default security group is absent

2014-10-24 Thread ugvddm
yes it is, I have found  your said that we can't delete the defualt
group, thus we should change it to invalid.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384347

Title:
  Couldn't run instance with existing port when default security group
  is absent

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  If default security group in tenant is deleted (admin has appropriate
  permissions) then launching an instance with Neutron port fails at
  allocate network resources stage:

  ERROR nova.compute.manager [-] Instance failed network setup after 1 
attempt(s)
  TRACE nova.compute.manager Traceback (most recent call last):
  TRACE nova.compute.manager   File /opt/stack/nova/nova/compute/manager.py, 
line 1528, in _allocate_network_async
  TRACE nova.compute.manager dhcp_options=dhcp_options)
  TRACE nova.compute.manager   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 294, in 
allocate_for_instance
  TRACE nova.compute.manager security_group_id=security_group)
  TRACE nova.compute.manager SecurityGroupNotFound: Security group default not 
found.

  Steps to reproduce:
  0. Delete the default security group with admin account.
  1. Create custom security group
  2. Create a network and a subnet
  3. Create a port in the subnet with the custom security group
  4. Launch an instance with the port (and don't specify any security group)

  Launch command is accepted successfully, but 'nova show' command
  returns the instance in error state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1384347/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342220] Re: pkg_resources.DistributionNotFound: virtualenv>=1.9.1 on bare-centos6-hpcloud-b3-901545 slave

2014-10-24 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342220

Title:
  pkg_resources.DistributionNotFound: virtualenv>=1.9.1 on bare-
  centos6-hpcloud-b3-901545 slave

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Jenkins slaves bare-centos6-hpcloud-b3-901545 and  bare-
  centos6-hpcloud-b4-901362 are buggy as they do not have correct
  packages to run builds. This can be seen at [1][2]

  
[1]https://jenkins06.openstack.org/job/gate-python-keystoneclient-python26/353/
  [2]https://jenkins04.openstack.org/job/gate-neutron-python26/2280/

  2014-07-15 15:53:56.626 | Started by user anonymous
  2014-07-15 15:53:56.628 | Building remotely on bare-centos6-hpcloud-b3-901545 
in workspace /home/jenkins/workspace/gate-neutron-python26
  2014-07-15 15:54:30.412 | [gate-neutron-python26] $ /bin/bash 
/tmp/hudson7323049886741892388.sh
  2014-07-15 15:54:31.684 | [gate-neutron-python26] $ /bin/bash -xe 
/tmp/hudson4801467944983281998.sh
  2014-07-15 15:54:31.721 | + 
/usr/local/jenkins/slave_scripts/gerrit-git-prep.sh 
https://review.openstack.org git://git.openstack.org
  2014-07-15 15:54:31.721 | Triggered by: https://review.openstack.org/105542
  2014-07-15 15:54:31.722 | + [[ ! -e .git ]]
  2014-07-15 15:54:31.722 | + ls -a
  2014-07-15 15:54:31.723 | .
  2014-07-15 15:54:31.723 | ..
  2014-07-15 15:54:31.723 | + rm -fr '.[^.]*' '*'
  2014-07-15 15:54:31.724 | + '[' -d /opt/git/openstack/neutron/.git ']'
  2014-07-15 15:54:31.724 | + git clone file:///opt/git/openstack/neutron .
  2014-07-15 15:54:31.724 | Initialized empty Git repository in 
/home/jenkins/workspace/gate-neutron-python26/.git/
  2014-07-15 15:54:51.015 | + git remote set-url origin 
git://git.openstack.org/openstack/neutron
  2014-07-15 15:54:51.017 | + git remote update
  2014-07-15 15:54:51.019 | Fetching origin
  2014-07-15 15:54:52.524 | From git://git.openstack.org/openstack/neutron
  2014-07-15 15:54:52.526 |  * [new branch]  stable/havana - 
origin/stable/havana
  2014-07-15 15:54:52.526 |  * [new branch]  stable/icehouse - 
origin/stable/icehouse
  2014-07-15 15:54:52.526 | + git reset --hard
  2014-07-15 15:54:52.648 | HEAD is now at 2d4b75b Merge Add -s option for 
neutron metering rules
  2014-07-15 15:54:52.649 | + git clean -x -f -d -q
  2014-07-15 15:54:52.655 | + echo 
refs/zuul/master/Z4f38c6ec4f09451f856a3d642a5f7388
  2014-07-15 15:54:52.655 | + grep -q '^refs/tags/'
  2014-07-15 15:54:52.657 | + '[' -z '' ']'
  2014-07-15 15:54:52.657 | + git fetch 
http://zm02.openstack.org/p/openstack/neutron 
refs/zuul/master/Z4f38c6ec4f09451f856a3d642a5f7388
  2014-07-15 15:54:55.589 | From http://zm02.openstack.org/p/openstack/neutron
  2014-07-15 15:54:55.589 |  * branch
refs/zuul/master/Z4f38c6ec4f09451f856a3d642a5f7388 - FETCH_HEAD
  2014-07-15 15:54:55.590 | + git checkout FETCH_HEAD
  2014-07-15 15:54:55.616 | Note: checking out 'FETCH_HEAD'.
  2014-07-15 15:54:55.616 |
  2014-07-15 15:54:55.616 | You are in 'detached HEAD' state. You can look 
around, make experimental
  2014-07-15 15:54:55.616 | changes and commit them, and you can discard any 
commits you make in this
  2014-07-15 15:54:55.616 | state without impacting any branches by performing 
another checkout.
  2014-07-15 15:54:55.616 |
  2014-07-15 15:54:55.616 | If you want to create a new branch to retain 
commits you create, you may
  2014-07-15 15:54:55.616 | do so (now or later) by using -b with the checkout 
command again. Example:
  2014-07-15 15:54:55.617 |
  2014-07-15 15:54:55.617 |   git checkout -b new_branch_name
  2014-07-15 15:54:55.617 |
  2014-07-15 15:54:55.617 | HEAD is now at fc5f2a9... Merge commit 
'refs/changes/42/105542/6' of 
ssh://review.openstack.org:29418/openstack/neutron into HEAD
  2014-07-15 15:54:55.617 | + git reset --hard FETCH_HEAD
  2014-07-15 15:54:55.632 | HEAD is now at fc5f2a9 Merge commit 
'refs/changes/42/105542/6' of 
ssh://review.openstack.org:29418/openstack/neutron into HEAD
  2014-07-15 15:54:55.633 | + git clean -x -f -d -q
  2014-07-15 15:54:55.639 | + '[' -f .gitmodules ']'
  2014-07-15 15:54:56.013 | [gate-neutron-python26] $ /bin/bash -xe 
/tmp/hudson691292143843068196.sh
  2014-07-15 15:54:56.053 | + /usr/local/jenkins/slave_scripts/run-unittests.sh 
26 openstack neutron
  2014-07-15 15:54:56.053 | + version=26
  2014-07-15 15:54:56.053 | + org=openstack
  2014-07-15 15:54:56.053 | + project=neutron
  2014-07-15 15:54:56.053 | + source 
/usr/local/jenkins/slave_scripts/functions.sh
  2014-07-15 15:54:56.054 | + check_variable_version_org_project 26 openstack 
neutron /usr/local/jenkins/slave_scripts/run-unittests.sh
  2014-07-15 15:54:56.054 | + version=26
  2014-07-15 15:54:56.054 | + org=openstack
  2014-07-15 15:54:56.054 | +