[Yahoo-eng-team] [Bug 1378187] [NEW] IPv6 unit tests should have ipv6 rather than v6

2014-10-07 Thread Akihiro Motoki
Public bug reported:

Currently, some plugins do not have IPv6 support, and the keyword "v6"
is checked to skip IPv6 unit tests in these plugins. "v6" is too short
and may match in a different context. It would be better to use a
clearer keyword such as "ipv6" in IPv6-related unit tests.

https://review.openstack.org/#/c/126407/3/neutron/tests/unit/opencontrail/test_contrail_plugin.py
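For illustration, a minimal sketch (test names here are hypothetical,
not the plugin code) of why a bare substring check on 'v6' is too broad
while 'ipv6' is not:

```python
def should_skip(test_name, keyword):
    """Return True when the skip keyword appears in the test name."""
    return keyword in test_name

# 'v6' also matches names that merely contain the substring:
assert should_skip('test_create_subnet_with_v6_pool', 'v6')
assert should_skip('test_kv6_store', 'v6')        # unintended match
# 'ipv6' only matches tests that really are IPv6-related:
assert should_skip('test_create_subnet_ipv6_modes', 'ipv6')
assert not should_skip('test_kv6_store', 'ipv6')
```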

** Affects: neutron
 Importance: Low
 Status: New


** Tags: low-hanging-fruit unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378187

Title:
  IPv6 unit tests should have ipv6 rather than v6

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently, some plugins do not have IPv6 support, and the keyword
  "v6" is checked to skip IPv6 unit tests in these plugins. "v6" is too
  short and may match in a different context. It would be better to use
  a clearer keyword such as "ipv6" in IPv6-related unit tests.

  
https://review.openstack.org/#/c/126407/3/neutron/tests/unit/opencontrail/test_contrail_plugin.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1378187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377304] Re: Deleting endpoint group project fails

2014-10-07 Thread Thierry Carrez
** No longer affects: keystone/juno

** Changed in: keystone
Milestone: kilo-1 => juno-rc2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1377304

Title:
  Deleting endpoint group project fails

Status in OpenStack Identity (Keystone):
  Fix Committed

Bug description:
  Deleting an endpoint group project fails because the router specifies
  a controller method that doesn't exist. Adding this test highlights
  the error:

  def test_removing_an_endpoint_group_project(self):
      # create endpoint group
      endpoint_group_id = self._create_valid_endpoint_group(
          self.DEFAULT_ENDPOINT_GROUP_URL, self.DEFAULT_ENDPOINT_GROUP_BODY)

      # create an endpoint_group project
      url = self._get_project_endpoint_group_url(
          endpoint_group_id, self.default_domain_project_id)
      self.put(url)

      # remove the endpoint group project
      self.delete(url)
      self.get(url, expected_status=404)

  The `self.delete(url)` fails with the following error:
  AttributeError: 'ProjectEndpointGroupV3Controller' object has no
  attribute 'remove_endpoint_group_from_project'

  This returns a 500 error to the user for what should be a successful
  operation.
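A minimal sketch of the shape of the fix, assuming the router dispatches
to a controller method by name (the class body is illustrative, not
Keystone's actual controller, which talks to the catalog backend):
defining the method the router expects removes the AttributeError and
the resulting 500:

```python
class ProjectEndpointGroupV3Controller(object):
    """Illustrative stand-in keeping associations in memory."""

    def __init__(self):
        self.associations = set()

    def add_endpoint_group_to_project(self, endpoint_group_id, project_id):
        self.associations.add((endpoint_group_id, project_id))

    # The method name the router references; without it, dispatch fails
    # with AttributeError and the API returns a 500.
    def remove_endpoint_group_from_project(self, endpoint_group_id,
                                           project_id):
        self.associations.discard((endpoint_group_id, project_id))

controller = ProjectEndpointGroupV3Controller()
controller.add_endpoint_group_to_project('eg1', 'p1')
controller.remove_endpoint_group_from_project('eg1', 'p1')
assert controller.associations == set()
```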

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1377304/+subscriptions



[Yahoo-eng-team] [Bug 1375937] Re: Downgrade of federation extension can fail due to FKs

2014-10-07 Thread Thierry Carrez
** No longer affects: keystone/juno

** Changed in: keystone
Milestone: None => juno-rc2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1375937

Title:
  Downgrade of federation extension can fail due to FKs

Status in OpenStack Identity (Keystone):
  Fix Committed

Bug description:
  In the 001 migration script of federation, we delete the tables in the
  wrong order - we should delete the federation_protocol table first,
  otherwise its FKs to the identity provider cause a problem
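The correct drop order can be illustrated with stdlib sqlite3 (a sketch
of the principle only, not the actual sqlalchemy-migrate downgrade
code): the federation_protocol table, which holds the foreign keys, is
dropped before identity_provider:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')
conn.execute('CREATE TABLE identity_provider (id TEXT PRIMARY KEY)')
conn.execute(
    'CREATE TABLE federation_protocol ('
    '    id TEXT PRIMARY KEY,'
    '    idp_id TEXT REFERENCES identity_provider(id))')

# Downgrade: drop the child table (the FK holder) first, then the parent.
conn.execute('DROP TABLE federation_protocol')
conn.execute('DROP TABLE identity_provider')

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
assert tables == []
```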

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1375937/+subscriptions



[Yahoo-eng-team] [Bug 1363047] Re: test_sql_upgrade and live_test not working for non-sqlite DBs

2014-10-07 Thread Thierry Carrez
** No longer affects: keystone/juno

** Changed in: keystone
Milestone: None => juno-rc2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1363047

Title:
  test_sql_upgrade and live_test not working for non-sqlite DBs

Status in OpenStack Identity (Keystone):
  Fix Committed

Bug description:
  It appears that our sql upgrade unit tests are broken for DBs that
  properly support FKs (teardown fails due to FK constraints).  I
  suspect this is because we no longer have the downgrade steps below
  034 (since they were squashed).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1363047/+subscriptions



[Yahoo-eng-team] [Bug 1378215] [NEW] If db deadlock occurs for some reason while deleting an image, no one can delete the image any more

2014-10-07 Thread Ankit Agrawal
Public bug reported:

The Glance API returns a 500 Internal Server Error if a DB deadlock
occurs in glance-registry for some reason while deleting an image.
The image's 'status' is set to 'deleted' but its 'deleted' flag remains
False. Because 'deleted' is still False, the image remains visible in
the image list, but it can no longer be deleted.

If you try to delete this image, the v1 API raises a 404 (Not Found)
error and the v2 API a 500 (HTTPInternalServerError).

Note:
To reproduce this issue I've explicitly raised a db_exception.DBDeadlock
exception from the _image_child_entry_delete_all method in
glance/db/sqlalchemy/api.py.
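One possible mitigation, sketched with hypothetical names (this is not
Glance's actual code, and DBDeadlock below is a stand-in for oslo.db's
db_exception.DBDeadlock): retry the registry operation a few times when
a deadlock is detected, instead of surfacing a 500 to the user:

```python
import time

class DBDeadlock(Exception):
    """Stand-in for oslo.db's db_exception.DBDeadlock."""

def retry_on_deadlock(fn, attempts=3, delay=0.0):
    """Call fn, retrying when the DB reports a deadlock."""
    for attempt in range(attempts):
        try:
            return fn()
        except DBDeadlock:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

calls = []

def flaky_delete():
    # Fails with a deadlock on the first call, succeeds afterwards.
    calls.append(1)
    if len(calls) < 2:
        raise DBDeadlock()
    return 'deleted'

assert retry_on_deadlock(flaky_delete) == 'deleted'
assert len(calls) == 2
```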

glance-api.log
--------------
2014-10-06 00:53:10.037 6827 INFO glance.registry.client.v1.client [2b47d213-6f80-410f-9766-dc80607f0224 7e7c3a413f184dbcb9a65404dbfcc0f0 309c5ff4082c4231bcc17d8c55c83997 - - -] Registry client request DELETE /images/f9f8a40d-530b-498c-9fbc-86f29da555f4 raised ServerError
2014-10-06 00:53:10.045 6827 INFO glance.wsgi.server [2b47d213-6f80-410f-9766-dc80607f0224 7e7c3a413f184dbcb9a65404dbfcc0f0 309c5ff4082c4231bcc17d8c55c83997 - - -] Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 433, in handle_one_response
    result = self.application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 394, in __call__
    response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
    application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, in call_application
    app_iter = application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/osprofiler/web.py", line 106, in __call__
    return request.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
    application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, in call_application
    app_iter = application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 748, in __call__
    return self._call_app(env, start_response)
  File "/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 684, in _call_app
    return self._app(env, _fake_start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 394, in __call__
    response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
    application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, in call_application
    app_iter = application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 394, in __call__
    response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
    application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, in call_application
    app_iter = application(self.environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/opt/stack/glance/glance/common/wsgi.py", line 394, in __call__
    response = req.get_response(self.application)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
    application, catch_exc_info=False)
  File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, in call_application
    app_iter = application(self.environ, start_response)
  File "/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 206, in __call__
    return

[Yahoo-eng-team] [Bug 1275256] Re: failed to reach SHELVED_OFFLOADED status during tempest-dsvm

2014-10-07 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1275256

Title:
  failed to reach SHELVED_OFFLOADED status during tempest-dsvm

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I added 2 log messages to glance and then my gate broke. It looks like
  it got stuck in SHELVED and failed to reach SHELVED_OFFLOADED,
  although I lack the knowledge to know what this really means. I'm not
  even sure if this bug belongs in nova or elsewhere.

  http://logs.openstack.org/89/68189/9/gate/gate-tempest-dsvm-postgres-
  full/ff84764/console.html

  
   Traceback (most recent call last):
  2014-02-01 09:59:15.116 |   File "tempest/api/compute/v3/servers/test_servers_negative.py", line 403, in test_shelve_shelved_server
  2014-02-01 09:59:15.116 |     extra_timeout=offload_time)
  2014-02-01 09:59:15.116 |   File "tempest/services/compute/v3/json/servers_client.py", line 169, in wait_for_server_status
  2014-02-01 09:59:15.116 |     raise_on_error=raise_on_error)
  2014-02-01 09:59:15.116 |   File "tempest/common/waiters.py", line 89, in wait_for_server_status
  2014-02-01 09:59:15.116 |     raise exceptions.TimeoutException(message)
  2014-02-01 09:59:15.116 | TimeoutException: Request timed out
  2014-02-01 09:59:15.116 | Details: Server d193bafb-11cf-4592-8be7-174d2f94a68d failed to reach SHELVED_OFFLOADED status and task state None within the required time (196 s). Current status: SHELVED. Current task state: None.
  2014-02-01 09:59:15.116 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1275256/+subscriptions



[Yahoo-eng-team] [Bug 1378233] [NEW] Provide an option to ignore suspended VMs in the resource count

2014-10-07 Thread Arthur Lutz (Logilab)
Public bug reported:

It would be very useful for our use case to have an option that avoids
counting suspended machines as consuming resources. The use case is
having little memory available and still being able to launch new VMs
while old VMs are suspended. We understand that once the compute node's
memory is full we won't be able to resume these machines, but that is
OK with the way we're using our cloud.

For example, on a compute node with 8 GB of RAM, launch one VM with
4 GB and another with 2 GB, then suspend them both; one could then
launch a new VM with 4 GB of RAM (the actual memory on the compute node
is free).

On Essex we had the following patch to make this scenario work:

Index: nova/nova/scheduler/host_manager.py
===================================================================
--- nova.orig/nova/scheduler/host_manager.py
+++ nova/nova/scheduler/host_manager.py
@@ -337,6 +337,8 @@ class HostManager(object):
             if not host:
                 continue
             host_state = host_state_map.get(host, None)
+            if instance.get('power_state', 1) != 1:  # power_state.RUNNING
+                continue
             if not host_state:
                 continue
             host_state.consume_from_instance(instance)

We're looking into patching icehouse for the same behaviour but would
like to add it as an option this time.
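Sketched as a simplified, hypothetical version of the resource-counting
loop (the function and flag names are assumptions, not Nova code), the
behaviour we would like to make configurable looks like this:

```python
RUNNING = 1  # mirrors nova.compute.power_state.RUNNING

def consume_instances(instances, ignore_suspended=False):
    """Sum the memory counted against a host, optionally skipping
    instances that are not in the RUNNING power state."""
    consumed_mb = 0
    for instance in instances:
        if (ignore_suspended
                and instance.get('power_state', RUNNING) != RUNNING):
            continue
        consumed_mb += instance['memory_mb']
    return consumed_mb

# The 8G example: two suspended VMs of 4G and 2G.
instances = [
    {'memory_mb': 4096, 'power_state': 4},  # suspended
    {'memory_mb': 2048, 'power_state': 4},  # suspended
]
assert consume_instances(instances) == 6144                      # today
assert consume_instances(instances, ignore_suspended=True) == 0  # proposed
```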

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378233

Title:
  Provide an option to ignore suspended VMs in the resource count

Status in OpenStack Compute (Nova):
  New

Bug description:
  It would be very useful for our use case to have an option that
  avoids counting suspended machines as consuming resources. The use
  case is having little memory available and still being able to launch
  new VMs while old VMs are suspended. We understand that once the
  compute node's memory is full we won't be able to resume these
  machines, but that is OK with the way we're using our cloud.

  For example, on a compute node with 8 GB of RAM, launch one VM with
  4 GB and another with 2 GB, then suspend them both; one could then
  launch a new VM with 4 GB of RAM (the actual memory on the compute
  node is free).

  On Essex we had the following patch to make this scenario work:

  Index: nova/nova/scheduler/host_manager.py
  ===================================================================
  --- nova.orig/nova/scheduler/host_manager.py
  +++ nova/nova/scheduler/host_manager.py
  @@ -337,6 +337,8 @@ class HostManager(object):
               if not host:
                   continue
               host_state = host_state_map.get(host, None)
  +            if instance.get('power_state', 1) != 1:  # power_state.RUNNING
  +                continue
               if not host_state:
                   continue
               host_state.consume_from_instance(instance)

  We're looking into patching icehouse for the same behaviour but would
  like to add it as an option this time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378233/+subscriptions



[Yahoo-eng-team] [Bug 1266175] Re: 'Import contextlib' should be in the section with standard python libraries

2014-10-07 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: next => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266175

Title:
  'Import contextlib' should be in the section with standard python
  libraries

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  There are a number of cases when the import is in the wrong section

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1266175/+subscriptions



[Yahoo-eng-team] [Bug 1287292] Re: VMware: vim.get_soap_url improper IPv6 address

2014-10-07 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: next => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1287292

Title:
  VMware: vim.get_soap_url improper IPv6 address

Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Fix Released

Bug description:
  The vim.get_soap_url function incorrectly builds an IPv6 address using
  hostname/IP and port.

  https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vim.py#L151

  The result of this line would create an address as follows:
  https://[2001:db8:85a3:8d3:1319:8a2e:370:7348:443]/sdk

  Ports should be outside the square brackets, not inside, as follows:

  https://[2001:db8:85a3:8d3:1319:8a2e:370:7348]:443/sdk

  For reference see http://en.wikipedia.org/wiki/IPv6_address, section
  "Literal IPv6 addresses in network resource identifiers".
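A sketch of the corrected construction (the helper name is hypothetical,
not vim.get_soap_url itself): bracket only the host part when it is a
bare IPv6 literal, then append the port outside the brackets:

```python
def build_soap_url(host, port, path='sdk'):
    """Build an https URL, bracketing bare IPv6 literal hosts."""
    if ':' in host:  # a bare IPv6 literal contains colons
        host = '[%s]' % host
    return 'https://%s:%s/%s' % (host, port, path)

assert (build_soap_url('2001:db8:85a3:8d3:1319:8a2e:370:7348', 443)
        == 'https://[2001:db8:85a3:8d3:1319:8a2e:370:7348]:443/sdk')
assert (build_soap_url('vcenter.example.com', 443)
        == 'https://vcenter.example.com:443/sdk')
```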

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1287292/+subscriptions



[Yahoo-eng-team] [Bug 1270573] Re: VMware: resize does not wait for task to complete

2014-10-07 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: next => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270573

Title:
  VMware: resize does not wait for task to complete

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The operation did not wait for the task to complete. In some edge
  cases this may fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270573/+subscriptions



[Yahoo-eng-team] [Bug 1257726] Re: VMware: refactor volumeops._get_volume_uuid()

2014-10-07 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: ongoing => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257726

Title:
  VMware: refactor volumeops._get_volume_uuid()

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Recently I have been doing some queries for extraConfig VM options and
  found that the most efficient way to retrieve a given property is to
  do:

  session._call_method(vim_util, 'get_dynamic_property', vm_ref,
  'VirtualMachine', 'config.extraConfig[some_prop_here]')

  Right now we ask for all extraConfig options and then we iterate over
  the result set to find a particular one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257726/+subscriptions



[Yahoo-eng-team] [Bug 1244918] Re: VMware ESX: Boot from volume errors out due to relocate

2014-10-07 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: next => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244918

Title:
  VMware ESX: Boot from volume errors out due to relocate

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When trying to perform boot instance from volume using the
  VMwareESXDriver, the operation errors out.

  Command:
  $ nova boot --flavor 1  --block-device-mapping 
vda=222e8ece-8723-4930-803c-8ae5cf233a87:::0 vm1

  Log messages

  d3-59fce43903e8] Root volume attach. Driver type: vmdk attach_root_volume 
/opt/stack/nova/nova/virt/vmwareapi/volumeops.py:458
  2013-10-26 14:49:13.393 30706 WARNING nova.virt.vmwareapi.driver [-] Task [RelocateVM_Task] (returnval){
     value = "haTask-162-vim.VirtualMachine.relocate-327302855"
     _type = "Task"
   } status: error The operation is not supported on the object.
  2013-10-26 14:49:13.394 30706 ERROR nova.compute.manager [req-e95b7262-a70c-436b-a9d5-0b8045cbf3f5 4471d6567a6b4dd29affbc849f3814d9 256df8ea370d4de2b40edfe9b0ea4063] [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8] Instance failed to spawn
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8] Traceback (most recent call last):
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]   File "/opt/stack/nova/nova/compute/manager.py", line 1410, in _spawn
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]     block_device_info)
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]   File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 178, in spawn
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]     admin_password, network_info, block_device_info)
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]   File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 538, in spawn
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]     data_store_ref)
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]   File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 467, in attach_root_volume
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]     self._relocate_vmdk_volume(volume_ref, res_pool, datastore)
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]   File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 295, in _relocate_vmdk_volume
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]     self._session._wait_for_task(task.value, task)
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]   File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 901, in _wait_for_task
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]     ret_val = done.wait()
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]   File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]     return hubs.get_hub().switch()
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]     return self.greenlet.switch()
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8] NovaException: The operation is not supported on the object.
  2013-10-26 14:49:13.394 30706 TRACE nova.compute.manager [instance: 6b190d4d-231d-43ec-86d3-59fce43903e8]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1244918/+subscriptions



[Yahoo-eng-team] [Bug 1223309] Re: v3 security group's attribute without prefix in create's response

2014-10-07 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: next => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1223309

Title:
  v3 security group's attribute without prefix in create's response

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Both for xml and json:

  {
      "server": {
          "admin_pass": "%(password)s",
          "id": "%(id)s",
          "links": [
              {
                  "href": "http://openstack.example.com/v3/servers/%(uuid)s",
                  "rel": "self"
              },
              {
                  "href": "http://openstack.example.com/servers/%(uuid)s",
                  "rel": "bookmark"
              }
          ],
          "security_groups": [{"name": "test"}]
      }
  }

  
  <?xml version='1.0' encoding='UTF-8'?>
  <server xmlns:atom="http://www.w3.org/2005/Atom"
          xmlns="http://docs.openstack.org/compute/api/v1.1"
          id="%(id)s" admin_pass="%(password)s">
    <metadata/>
    <atom:link href="%(host)s/v3/servers/%(uuid)s" rel="self"/>
    <atom:link href="%(host)s/servers/%(uuid)s" rel="bookmark"/>
    <security_groups>
      <security_group name="test"/>
    </security_groups>
  </server>


  'security_groups' should be 'os-security-groups:security_groups'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1223309/+subscriptions



[Yahoo-eng-team] [Bug 1196924] Re: Stop and Delete operations should give the Guest a chance to shutdown

2014-10-07 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: next => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1196924

Title:
  Stop and Delete operations should give the Guest a chance to shutdown

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Currently, libvirt stop and delete operations simply destroy the
  underlying VM. Some guest OSes do not react well to this type of
  power failure, and it would be better if these operations followed
  the same approach as a soft_reboot, giving the guest a chance to shut
  down gracefully. Even when a VM is being deleted, it may be booted
  from a volume that will be reused on another server.
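The proposed flow can be sketched with a stand-in domain object (names
are hypothetical; Nova's libvirt driver would use the real libvirt API):
ask the guest to shut down, poll for a while, and only fall back to a
hard destroy when it does not comply:

```python
import time

def stop_instance(dom, timeout=60, poll=0.1):
    dom.shutdown()                    # polite, ACPI-style request
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not dom.is_active():
            return 'clean'            # guest shut down on its own
        time.sleep(poll)
    dom.destroy()                     # hard power-off as a last resort
    return 'forced'

class FakeDomain(object):
    """Test double standing in for a libvirt domain handle."""
    def __init__(self, cooperative):
        self.cooperative = cooperative
        self.active = True
    def shutdown(self):
        if self.cooperative:
            self.active = False
    def is_active(self):
        return self.active
    def destroy(self):
        self.active = False

assert stop_instance(FakeDomain(True)) == 'clean'
assert stop_instance(FakeDomain(False), timeout=0) == 'forced'
```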

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1196924/+subscriptions



[Yahoo-eng-team] [Bug 1378240] [NEW] It would be nice if modal windows could be moved and repositioned on the screen

2014-10-07 Thread Matthias Runge
Public bug reported:

Description of problem:

I've had situations where I had a modal window open to create/edit
something and wanted to take a quick look at the table behind it to
double-check another instance of whatever I was editing. I keep trying
to move the window out of the way and keep it open in those instances,
but it hasn't worked yet :) I've had to give up and cancel out of the
dialog to get it out of the way, then start over after getting the
information I wanted.

It'd be nice if it could be moved around the screen when necessary, like
many other modal dialog implementations.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378240

Title:
  It would be nice if modal windows could be moved and repositioned on
  the screen

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description of problem:

  I've had situations where I had a modal window open to create/edit
  something and wanted to take a quick look at the table behind it to
  double-check another instance of whatever I was editing. I keep
  trying to move the window out of the way and keep it open in those
  instances, but it hasn't worked yet :) I've had to give up and cancel
  out of the dialog to get it out of the way, then start over after
  getting the information I wanted.

  It'd be nice if it could be moved around the screen when necessary,
  like many other modal dialog implementations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378240/+subscriptions



[Yahoo-eng-team] [Bug 1377304] Re: Deleting endpoint group project fails

2014-10-07 Thread Thierry Carrez
** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1377304

Title:
  Deleting endpoint group project fails

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  Deleting an endpoint group project fails because the router specifies
  a controller method that doesn't exist. Adding this test highlights
  the error:

  def test_removing_an_endpoint_group_project(self):
      # create endpoint group
      endpoint_group_id = self._create_valid_endpoint_group(
          self.DEFAULT_ENDPOINT_GROUP_URL, self.DEFAULT_ENDPOINT_GROUP_BODY)

      # create an endpoint_group project
      url = self._get_project_endpoint_group_url(
          endpoint_group_id, self.default_domain_project_id)
      self.put(url)

      # remove the endpoint group project
      self.delete(url)
      self.get(url, expected_status=404)

  The `self.delete(url)` fails with the following error:
  AttributeError: 'ProjectEndpointGroupV3Controller' object has no
  attribute 'remove_endpoint_group_from_project'

  This returns a 500 error to the user for what should be a successful
  operation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1377304/+subscriptions



[Yahoo-eng-team] [Bug 1375937] Re: Downgrade of federation extension can fail due to FKs

2014-10-07 Thread Thierry Carrez
** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1375937

Title:
  Downgrade of federation extension can fail due to FKs

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  In the 001 migration script of federation, we delete the tables in the
  wrong order - we should delete the federation_protocol table first,
  otherwise its FKs to the identity provider cause a problem
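The ordering constraint can be demonstrated with a minimal stdlib sqlite3 sketch (table names are borrowed from the report; this is not Keystone's actual migration code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE identity_provider (id TEXT PRIMARY KEY)")
conn.execute(
    "CREATE TABLE federation_protocol ("
    "id TEXT PRIMARY KEY, "
    "idp_id TEXT REFERENCES identity_provider(id))")
conn.execute("INSERT INTO identity_provider VALUES ('idp1')")
conn.execute("INSERT INTO federation_protocol VALUES ('saml2', 'idp1')")

# Dropping the parent first fails while a child row still references it.
try:
    conn.execute("DROP TABLE identity_provider")
    dropped_parent_first = True
except sqlite3.IntegrityError:
    dropped_parent_first = False

# Correct downgrade order: drop the referencing (child) table first.
conn.execute("DROP TABLE federation_protocol")
conn.execute("DROP TABLE identity_provider")
```

The same principle is what the 001 downgrade needs: federation_protocol before identity_provider.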

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1375937/+subscriptions



[Yahoo-eng-team] [Bug 1363047] Re: test_sql_upgrade and live_test not working for non-sqlite DBs

2014-10-07 Thread Thierry Carrez
** Changed in: keystone
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1363047

Title:
  test_sql_upgrade and live_test not working for non-sqlite DBs

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  It appears that our sql upgrade unit tests are broken for DBs that
  properly support FKs (teardown fails due to FK constraints).  I
  suspect this is because we no longer have the downgrade steps below
  034 (since they were squashed).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1363047/+subscriptions



[Yahoo-eng-team] [Bug 1378252] [NEW] White stripe at the bottom of sidebar

2014-10-07 Thread Tatiana Ovchinnikova
Public bug reported:

Close 'Project' and 'Admin' panels and open 'Identity'. There is a white
stripe at the bottom of the block, however there should be light gray
(#f9f9f9).

** Affects: horizon
 Importance: Low
 Assignee: Tatiana Ovchinnikova (tmazur)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) = Tatiana Ovchinnikova (tmazur)

** Changed in: horizon
   Importance: Undecided = Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378252

Title:
  White stripe at the bottom of sidebar

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Close 'Project' and 'Admin' panels and open 'Identity'. There is a
  white stripe at the bottom of the block, however there should be light
  gray (#f9f9f9).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378252/+subscriptions



[Yahoo-eng-team] [Bug 1378270] [NEW] keystone-manage db_sync command failed

2014-10-07 Thread Swami Reddy
Public bug reported:

When I ran stack.sh from the latest devstack, the error below was shown:
==
2014-10-07 09:21:50.826 | + mysql -uroot -pcloud -h127.0.0.1 -e 'CREATE DATABASE keystone CHARACTER SET utf8;'
2014-10-07 09:21:50.831 | + /opt/stack/keystone/bin/keystone-manage db_sync
2014-10-07 09:21:51.435 | Traceback (most recent call last):
2014-10-07 09:21:51.436 |   File "/opt/stack/keystone/bin/keystone-manage", line 30, in <module>
2014-10-07 09:21:51.436 |     from keystone import cli
2014-10-07 09:21:51.436 |   File "/opt/stack/keystone/keystone/cli.py", line 31, in <module>
2014-10-07 09:21:51.436 |     from keystone import token
2014-10-07 09:21:51.437 |   File "/opt/stack/keystone/keystone/token/__init__.py", line 15, in <module>
2014-10-07 09:21:51.437 |     from keystone.token import controllers  # noqa
2014-10-07 09:21:51.437 |   File "/opt/stack/keystone/keystone/token/controllers.py", line 31, in <module>
2014-10-07 09:21:51.437 |     from keystone.token import provider
2014-10-07 09:21:51.438 |   File "/opt/stack/keystone/keystone/token/provider.py", line 37, in <module>
2014-10-07 09:21:51.438 |     from keystone.token import persistence
2014-10-07 09:21:51.438 |   File "/opt/stack/keystone/keystone/token/persistence/__init__.py", line 13, in <module>
2014-10-07 09:21:51.439 |     from keystone.token.persistence.core import *  # noqa
2014-10-07 09:21:51.439 |   File "/opt/stack/keystone/keystone/token/persistence/core.py", line 44, in <module>
2014-10-07 09:21:51.439 |     class PersistenceManager(manager.Manager):
2014-10-07 09:21:51.439 |   File "/opt/stack/keystone/keystone/token/persistence/core.py", line 58, in PersistenceManager
2014-10-07 09:21:51.440 |     what='token_api.unique_id')
2014-10-07 09:21:51.440 |   File "/opt/stack/keystone/keystone/openstack/common/versionutils.py", line 128, in __call__
2014-10-07 09:21:51.440 |     @six.wraps(func_or_cls)
2014-10-07 09:21:51.440 | AttributeError: 'module' object has no attribute 'wraps'
2014-10-07 09:21:51.479 | + exit_trap
2014-10-07 09:21:51.479 | + local r=1
2014-10-07 09:21:51.479 | ++ jobs -p
2014-10-07 09:21:51.480 | + jobs=
2014-10-07 09:21:51.480 | + [[ -n '' ]]
2014-10-07 09:21:51.480 | + kill_spinner
2014-10-07 09:21:51.481 | + '[' '!' -z '' ']'
2014-10-07 09:21:51.481 | + [[ 1 -ne 0 ]]
2014-10-07 09:21:51.481 | + echo 'Error on exit'
2014-10-07 09:21:51.481 | Error on exit
2014-10-07 09:21:51.481 | + [[ -z /opt/stack/logs ]]
2014-10-07 09:21:51.481 | + /home/swami/devstack/tools/worlddump.py -d 
/opt/stack/logs
2014-10-07 09:21:51.526 | + exit 1
==


Looks like this is similar to https://bugs.launchpad.net/nova/+bug/1083054
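The traceback points at six.wraps, which only exists in newer six releases, so an old six on the path makes the decorator blow up at import time. A defensive sketch of the idea (illustrative code, not the actual oslo versionutils implementation):

```python
import functools

try:
    import six
    wraps = six.wraps          # missing on old six -> AttributeError
except (ImportError, AttributeError):
    # Fall back to the stdlib, which six.wraps delegates to anyway.
    wraps = functools.wraps

def deprecated(func):
    # Minimal stand-in for a deprecation decorator that preserves the
    # wrapped function's metadata via wraps().
    @wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@deprecated
def unique_id(token_id):
    return token_id
```

In practice the real fix is to upgrade six to a release that provides six.wraps.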

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: keystone

** Tags added: keystone

** Project changed: nova = keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378270

Title:
  keystone-manage db_sync command failed

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When I ran stack.sh from the latest devstack, the error below was shown:
  ==
  2014-10-07 09:21:50.826 | + mysql -uroot -pcloud -h127.0.0.1 -e 'CREATE DATABASE keystone CHARACTER SET utf8;'
  2014-10-07 09:21:50.831 | + /opt/stack/keystone/bin/keystone-manage db_sync
  2014-10-07 09:21:51.435 | Traceback (most recent call last):
  2014-10-07 09:21:51.436 |   File "/opt/stack/keystone/bin/keystone-manage", line 30, in <module>
  2014-10-07 09:21:51.436 |     from keystone import cli
  2014-10-07 09:21:51.436 |   File "/opt/stack/keystone/keystone/cli.py", line 31, in <module>
  2014-10-07 09:21:51.436 |     from keystone import token
  2014-10-07 09:21:51.437 |   File "/opt/stack/keystone/keystone/token/__init__.py", line 15, in <module>
  2014-10-07 09:21:51.437 |     from keystone.token import controllers  # noqa
  2014-10-07 09:21:51.437 |   File "/opt/stack/keystone/keystone/token/controllers.py", line 31, in <module>
  2014-10-07 09:21:51.437 |     from keystone.token import provider
  2014-10-07 09:21:51.438 |   File "/opt/stack/keystone/keystone/token/provider.py", line 37, in <module>
  2014-10-07 09:21:51.438 |     from keystone.token import persistence
  2014-10-07 09:21:51.438 |   File "/opt/stack/keystone/keystone/token/persistence/__init__.py", line 13, in <module>
  2014-10-07 09:21:51.439 |     from keystone.token.persistence.core import *  # noqa
  2014-10-07 09:21:51.439 |   File "/opt/stack/keystone/keystone/token/persistence/core.py", line 44, in <module>
  2014-10-07 09:21:51.439 |     class PersistenceManager(manager.Manager):
  2014-10-07 09:21:51.439 |   File

[Yahoo-eng-team] [Bug 1367432] Re: developer docs don't include metadata definitions concepts

2014-10-07 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367432

Title:
  developer docs don't include metadata definitions concepts

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  The below site has the API docs, but there isn't any other mention of
  the metadata definitions concepts.

  http://docs.openstack.org/developer/glance/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1367432/+subscriptions



[Yahoo-eng-team] [Bug 1373993] Re: Trusted Filter uses unsafe SSL connection

2014-10-07 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373993

Title:
  Trusted Filter uses unsafe SSL connection

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  HTTPSClientAuthConnection uses httplib.HTTPSConnection objects. In
  Python 2.x those do not perform CA checks so client connections are
  vulnerable to MiM attacks.

  This should be changed to use the requests lib.
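What certificate verification entails can be illustrated with the stdlib ssl module (a sketch of the behaviour, not Nova's attestation client code):

```python
import ssl

# A default client-side SSL context loads the system CA bundle and enforces
# both chain and hostname verification -- the checks that Python 2.x
# httplib.HTTPSConnection silently skipped, and that the requests library
# performs by default.
ctx = ssl.create_default_context()

# The certificate chain must validate against a trusted CA...
chain_checked = ctx.verify_mode == ssl.CERT_REQUIRED
# ...and the server certificate must match the requested hostname.
hostname_checked = ctx.check_hostname
```

Without both checks, an active attacker can present any certificate and read the attestation traffic, which is the MiM exposure described above.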

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373993/+subscriptions



[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2014-10-07 Thread Thierry Carrez
** Changed in: barbican
Milestone: juno-rc1 = None

** No longer affects: barbican

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in Designate:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) icehouse series:
  New
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  In Progress
Status in Keystone icehouse series:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Triaged
Status in Python client library for Neutron:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Openstack Database (Trove):
  Fix Released
Status in Web Services Made Easy:
  New

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.
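A common shape of these failures, sketched below: a test serializes an unordered container and compares the result to a fixed string, which only passes under some hash seeds.

```python
# Fragile pattern: iteration order of a set of strings (and of dicts on
# older Pythons) depends on PYTHONHASHSEED, so this joined string is not
# stable across runs.
tags = {"low-hanging-fruit", "unittest"}
fragile = ",".join(tags)

# Robust alternative: impose an explicit order before serializing, or
# compare parsed structures instead of strings.
robust = ",".join(sorted(tags))
```

Fixing the tests this way lets the PYTHONHASHSEED=0 override be removed from tox.ini.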

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1348818/+subscriptions



[Yahoo-eng-team] [Bug 1376945] Re: os-networks extension displays cidr incorrectly

2014-10-07 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376945

Title:
  os-networks extension displays cidr incorrectly

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The nova-networks extension is improperly converting cidr values to
  strings:

  $ nova network-list

  shows a list of ips for cidr:

  [u'192.168.50.0', u'192.168.50.1', u'192.168.50.2',
  u'192.168.50.3',...]

  This is possibly due to the extension being updated to use objects,
  but I don't recall seeing it previously, so it is possible something
  changed the way an ipnetwork is converted to json so that it now
  iterates through the object instead of printing it as a string.
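The symptom is easy to reproduce with the stdlib ipaddress module (Nova uses netaddr, whose network objects iterate the same way); this is a sketch of the behaviour, not the extension's code:

```python
import ipaddress

net = ipaddress.ip_network(u"192.168.50.0/24")

# Iterating the network object yields every address in it, which is what a
# serializer walking the object produces -- the list of IPs seen in the bug.
as_list = [str(ip) for ip in net]

# The intended JSON value is simply the string form of the network:
as_str = str(net)
```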

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376945/+subscriptions



[Yahoo-eng-team] [Bug 1369581] Re: compute-trust.json provides invalid data for trust filter

2014-10-07 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1369581

Title:
  compute-trust.json provides invalid data for trust filter

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  compute-trust.json provides such properties for trust filter:

  "properties": {
      "trust:trusted_host": {
          "title": "Intel® TXT attestation",
          "description": "Select to ensure that node has been attested by Intel® Trusted Execution Technology (Intel® TXT).",
          "type": "boolean"
      }
  }

  This means we actually require True/False values for trust levels,
  which does not match how the Trust Filter works (comment from the
  trust filter):

  Filter that only schedules tasks on a host if the integrity (trust)
  of that host matches the trust requested in the ``extra_specs`` for the
  flavor.  The ``extra_specs`` will contain a key/value pair where the
  key is ``trust``.  The value of this pair (``trusted``/``untrusted``) must
  match the integrity of that host (obtained from the Attestation
  service) before the task can be scheduled on that host.

  There is also level 'unknown' available:

  def _init_cache_entry(self, host):
      self.compute_nodes[host] = {
          'trust_lvl': 'unknown',
          'vtime': timeutils.normalize_time(
              timeutils.parse_isotime("1970-01-01T00:00:00Z"))}

  This means that compute-trust.json should be changed to match trust
  levels that are expected by Trust Filter.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1369581/+subscriptions



[Yahoo-eng-team] [Bug 1376307] Re: nova compute is crashing with the error TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'

2014-10-07 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376307

Title:
  nova compute is crashing with the error TypeError: unsupported operand
  type(s) for /: 'NoneType' and 'int'

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  nova compute is crashing with the below error when nova compute is
  started

  
  2014-10-01 14:50:26.854 DEBUG nova.virt.libvirt.driver [-] Updating host stats from (pid=9945) update_status /opt/stack/nova/nova/virt/libvirt/driver.py:6361
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 449, in fire_timers
      timer()
    File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 58, in __call__
      cb(*args, **kw)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 167, in _do_send
      waiter.switch(result)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 207, in main
      result = function(*args, **kwargs)
    File "/opt/stack/nova/nova/openstack/common/service.py", line 490, in run_service
      service.start()
    File "/opt/stack/nova/nova/service.py", line 181, in start
      self.manager.pre_start_hook()
    File "/opt/stack/nova/nova/compute/manager.py", line 1152, in pre_start_hook
      self.update_available_resource(nova.context.get_admin_context())
    File "/opt/stack/nova/nova/compute/manager.py", line 5946, in update_available_resource
      nodenames = set(self.driver.get_available_nodes())
    File "/opt/stack/nova/nova/virt/driver.py", line 1237, in get_available_nodes
      stats = self.get_host_stats(refresh=refresh)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5771, in get_host_stats
      return self.host_state.get_host_stats(refresh=refresh)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 470, in host_state
      self._host_state = HostState(self)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6331, in __init__
      self.update_status()
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6387, in update_status
      numa_topology = self.driver._get_host_numa_topology()
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4828, in _get_host_numa_topology
      for cell in topology.cells])
  TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
  2014-10-01 14:50:26.989 ERROR nova.openstack.common.threadgroup [-] unsupported operand type(s) for /: 'NoneType' and 'int'


  Seems like the commit 
https://github.com/openstack/nova/commit/6a374f21495c12568e4754800574e6703a0e626f
  is the cause.
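The failing expression reduces to dividing an unset NUMA cell memory value. A minimal sketch of the failure mode and the kind of guard a fix needs (illustrative, not the actual driver code):

```python
# libvirt capabilities without NUMA information leave the cell memory
# unset, so the size conversion divides None by an int.
cell_memory_kb = None

try:
    mem_mb = cell_memory_kb / 1024   # TypeError: NoneType / int
except TypeError:
    # Guard: treat the topology as unreported instead of crashing
    # nova-compute at startup.
    mem_mb = None
```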

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376307/+subscriptions



[Yahoo-eng-team] [Bug 1376492] Re: Minesweeper failure: tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_revert

2014-10-07 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376492

Title:
  Minesweeper failure:
  
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_revert

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Patch I7598afbf0dc3c527471af34224003d28e64daaff introduces a tempest
  failure with Minesweeper due to the fact that the destroy operation
  can be triggered by both the user and the revert resize operation. In
  case of a revert resize operation, we do not want to delete the
  original VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376492/+subscriptions



[Yahoo-eng-team] [Bug 1368032] Re: Add missing metadata definitions for Aggregate filters added in Juno

2014-10-07 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1368032

Title:
  Add missing metadata definitions for Aggregate filters added in Juno

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  The below spec implemented in Juno added numerous properties that can
  be set on host aggregates.  The Metadata Definitions catalog should
  include these properties.

  https://github.com/openstack/nova-specs/blob/master/specs/juno/per-
  aggregate-filters.rst

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1368032/+subscriptions



[Yahoo-eng-team] [Bug 1367981] Re: Nova instance config drive Metadata Definition

2014-10-07 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367981

Title:
  Nova instance config drive Metadata Definition

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  A nova Juno FFE landed to support setting the img_config_drive
  property on images to require images to be booted with a config drive.
  The Glance Metadata Definitions should include this property.

  See Nova Blueprint: https://blueprints.launchpad.net/nova/+spec
  /config-drive-image-property

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1367981/+subscriptions



[Yahoo-eng-team] [Bug 1367619] Re: MetadefNamespace.namespaces column should indicate nullable=False

2014-10-07 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367619

Title:
  MetadefNamespace.namespaces column should indicate nullable=False

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  The metadef_namespaces table definition indicates the namespace column
  as not accepting nulls. The related MetadefNamespace ORM class should
  also indicate that the namespace column does not accept nulls with
  nullable=False in the column definition.
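The point of the fix, sketched with plain sqlite3 rather than Glance's SQLAlchemy models: the ORM column declaration should mirror the NOT NULL constraint the table DDL already enforces.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# metadef_namespaces declares namespace as NOT NULL; the MetadefNamespace
# ORM class should mirror this with nullable=False on the column so the
# model and the schema agree.
conn.execute(
    "CREATE TABLE metadef_namespaces ("
    "id INTEGER PRIMARY KEY, "
    "namespace TEXT NOT NULL)")

try:
    conn.execute("INSERT INTO metadef_namespaces (namespace) VALUES (NULL)")
    null_accepted = True
except sqlite3.IntegrityError:
    null_accepted = False
```

Without nullable=False, the ORM lets a NULL reach the database and the error surfaces as a late IntegrityError instead of a model-level validation.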

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1367619/+subscriptions



[Yahoo-eng-team] [Bug 1329333] Re: BadRequest: Invalid volume: Volume status must be available or error

2014-10-07 Thread Ghanshyam Mann
The Nova libvirt driver is failing to detach the volume.

n-api logs-

2014-10-06 21:13:37.560 AUDIT nova.api.openstack.compute.contrib.volumes
[req-f87a213f-6677-4288-b91e-25769f55a2f3
TestEncryptedCinderVolumes-1235148374
TestEncryptedCinderVolumes-731143481] Detach volume
ec116004-afd7-4131-9ee8-02ab666ec7bd

---
c-api logs-

Begin detaching -

2014-10-06 21:13:37.864 18627 INFO cinder.api.openstack.wsgi 
[req-57cf26e1-8cdd-4e20-943c-393aba8286fd 980965010fee4b7f800ef366726b5927 
ba5e42d2f06340058633ad1a5a84b1b1 - - -] POST 
http://127.0.0.1:8776/v1/ba5e42d2f06340058633ad1a5a84b1b1/volumes/ec116004-afd7-4131-9ee8-02ab666ec7bd/action
2014-10-06 21:13:37.865 18627 DEBUG cinder.api.openstack.wsgi 
[req-57cf26e1-8cdd-4e20-943c-393aba8286fd 980965010fee4b7f800ef366726b5927 
ba5e42d2f06340058633ad1a5a84b1b1 - - -] Action body: {os-begin_detaching: 
null} get_method /opt/stack/new/cinder/cinder/api/openstack/wsgi.py:1008
-
Status changed to  Detaching - 

2014-10-06 21:13:38.078 18627 AUDIT cinder.api.v1.volumes 
[req-9b3ab70e-897b-4d27-80d1-89d5678a481f 980965010fee4b7f800ef366726b5927 
ba5e42d2f06340058633ad1a5a84b1b1 - - -] vol={'migration_status': None, 
'availability_zone': u'nova', 'terminated_at': None, 'updated_at': 
datetime.datetime(2014, 10, 6, 21, 13, 37), 'provider_geometry': None, 
'snapshot_id': None, 'ec2_id': None, 'mountpoint': u'/dev/vdb', 'deleted_at': 
None, 'id': u'ec116004-afd7-4131-9ee8-02ab666ec7bd', 'size': 1L, 'user_id': 
u'980965010fee4b7f800ef366726b5927', 'attach_time': 
u'2014-10-06T21:13:35.855790', 'attached_host': None, 'display_description': 
None, 'volume_admin_metadata': 
[cinder.db.sqlalchemy.models.VolumeAdminMetadata object at 0x4e80e90, 
cinder.db.sqlalchemy.models.VolumeAdminMetadata object at 0x5c1b250], 
'encryption_key_id': u'----', 'project_id': 
u'ba5e42d2f06340058633ad1a5a84b1b1', 'launched_at': datetime.datetime(2014, 10, 
6, 21, 13, 29), 'scheduled_at': datetime.datetime(2014, 10, 6, 21, 13, 29), 'status': u'detaching', 'volume_type_id': 
u'03ce3467-70d3-442f-a93b-4bcba3ac662a', 'deleted': False, 'provider_location': 
--
n-cpu log-  error from driver while detaching volume

2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/exception.py", line 88, in wrapped
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     payload)
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 68, in __exit__
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/exception.py", line 71, in wrapped
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/manager.py", line 274, in decorated_function
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     pass
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 68, in __exit__
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/manager.py", line 260, in decorated_function
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2014-10-06 21:13:39.391 

[Yahoo-eng-team] [Bug 1329333] Re: BadRequest: Invalid volume: Volume status must be available or error

2014-10-07 Thread Ghanshyam Mann
Invalidating for Tempest.

** Changed in: tempest
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329333

Title:
  BadRequest: Invalid volume: Volume status must be available or error

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  traceback from:
  
http://logs.openstack.org/40/99540/2/check/check-grenade-dsvm/85c496c/console.html

  
  2014-06-12 13:28:15.833 | tearDownClass (tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern)
  2014-06-12 13:28:15.833 | ---
  2014-06-12 13:28:15.833 | 
  2014-06-12 13:28:15.833 | Captured traceback:
  2014-06-12 13:28:15.833 | ~~~
  2014-06-12 13:28:15.833 | Traceback (most recent call last):
  2014-06-12 13:28:15.833 |   File "tempest/scenario/manager.py", line 157, in tearDownClass
  2014-06-12 13:28:15.833 |     cls.cleanup_resource(thing, cls.__name__)
  2014-06-12 13:28:15.834 |   File "tempest/scenario/manager.py", line 119, in cleanup_resource
  2014-06-12 13:28:15.834 |     resource.delete()
  2014-06-12 13:28:15.834 |   File "/opt/stack/new/python-cinderclient/cinderclient/v1/volumes.py", line 35, in delete
  2014-06-12 13:28:15.834 |     self.manager.delete(self)
  2014-06-12 13:28:15.834 |   File "/opt/stack/new/python-cinderclient/cinderclient/v1/volumes.py", line 228, in delete
  2014-06-12 13:28:15.834 |     self._delete("/volumes/%s" % base.getid(volume))
  2014-06-12 13:28:15.834 |   File "/opt/stack/new/python-cinderclient/cinderclient/base.py", line 162, in _delete
  2014-06-12 13:28:15.834 |     resp, body = self.api.client.delete(url)
  2014-06-12 13:28:15.834 |   File "/opt/stack/new/python-cinderclient/cinderclient/client.py", line 229, in delete
  2014-06-12 13:28:15.834 |     return self._cs_request(url, 'DELETE', **kwargs)
  2014-06-12 13:28:15.834 |   File "/opt/stack/new/python-cinderclient/cinderclient/client.py", line 187, in _cs_request
  2014-06-12 13:28:15.835 |     **kwargs)
  2014-06-12 13:28:15.835 |   File "/opt/stack/new/python-cinderclient/cinderclient/client.py", line 170, in request
  2014-06-12 13:28:15.835 |     raise exceptions.from_response(resp, body)
  2014-06-12 13:28:15.835 | BadRequest: Invalid volume: Volume status must be available or error, but current status is: in-use (HTTP 400) (Request-ID: req-9337623a-e2b7-48a3-97ab-f7a4845f2cd8)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1329333/+subscriptions



[Yahoo-eng-team] [Bug 1378317] [NEW] allow attaching a volume from the instance view

2014-10-07 Thread Matthias Runge
Public bug reported:

to attach a volume we need to go to the volumes tab and select an
instance in the edit attachment dialogue.

it would be nice, if we could associate a volume directly to the
instance from instances panel.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378317

Title:
  allow attaching a volume from the instance view

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  to attach a volume we need to go to the volumes tab and select an
  instance in the edit attachment dialogue.

  it would be nice, if we could associate a volume directly to the
  instance from instances panel.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378317/+subscriptions



[Yahoo-eng-team] [Bug 1378319] [NEW] Routers auto-rescheduling does not handle rescheduling failures

2014-10-07 Thread Oleg Bondarev
Public bug reported:

In case there is no eligible l3 agent for the router, the rescheduling task will 
fail and exit, 
thus it will not process other routers (if any) and will not reschedule routers 
when agents are back online.
Need to wrap self.reschedule_router() with try/except.
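A sketch of the proposed guard, with illustrative names (the real agent's scheduler API differs): each router is rescheduled inside its own try/except so that one failure cannot abort the whole pass.

```python
import logging

LOG = logging.getLogger(__name__)

def reschedule_routers(router_ids, reschedule_router):
    """Attempt each router independently; collect failures instead of aborting."""
    failed = []
    for router_id in router_ids:
        try:
            reschedule_router(router_id)
        except Exception:
            # e.g. no eligible l3 agent right now; log and move on so the
            # remaining routers (and later retries) still get processed
            LOG.exception("Failed to reschedule router %s", router_id)
            failed.append(router_id)
    return failed
```

Returning the failed ids also lets a later periodic run retry them once agents are back online.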

** Affects: neutron
 Importance: Undecided
 Assignee: Oleg Bondarev (obondarev)
 Status: New


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378319

Title:
  Routers auto-rescheduling does not handle rescheduling failures

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In case there is no eligible l3 agent for the router, the rescheduling task will 
fail and exit, 
  thus it will not process other routers (if any) and will not reschedule routers 
when agents are back online.
  Need to wrap self.reschedule_router() with try/except.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1378319/+subscriptions



[Yahoo-eng-team] [Bug 1314677] Re: nova-cells fails when using JSON file to store cell information

2014-10-07 Thread Chris J Arges
Hello Liam, or anyone else affected,

Accepted nova into trusty-proposed. The package will build now and be
available at
http://launchpad.net/ubuntu/+source/nova/1:2014.1.3-0ubuntu1 in a few
hours, and then in the -proposed repository.

Please help us by testing this new package.  See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to
enable and use -proposed.  Your feedback will aid us getting this update
out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, and change the tag
from verification-needed to verification-done. If it does not fix the
bug for you, please add a comment stating that, and change the tag to
verification-failed.  In either case, details of your testing will help
us make a better decision.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance!

** Changed in: nova (Ubuntu)
   Status: New => Fix Released

** Changed in: nova (Ubuntu Trusty)
   Status: New => Fix Committed

** Tags added: verification-needed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1314677

Title:
  nova-cells fails when using JSON file to store cell information

Status in OpenStack Compute (Nova):
  Fix Released
Status in “nova” package in Ubuntu:
  Fix Released
Status in “nova” source package in Trusty:
  Fix Committed

Bug description:
  As recommended in http://docs.openstack.org/havana/config-
  reference/content/section_compute-cells.html#cell-config-optional-json
  I'm creating the nova-cells config with the cell information stored in
  a json file. However, when I do this nova-cells fails to start with
  this error in the logs:

  2014-04-29 11:52:05.240 16759 CRITICAL nova [-] __init__() takes exactly 3 arguments (1 given)
  2014-04-29 11:52:05.240 16759 TRACE nova Traceback (most recent call last):
  2014-04-29 11:52:05.240 16759 TRACE nova   File "/usr/bin/nova-cells", line 10, in <module>
  2014-04-29 11:52:05.240 16759 TRACE nova     sys.exit(main())
  2014-04-29 11:52:05.240 16759 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/cmd/cells.py", line 40, in main
  2014-04-29 11:52:05.240 16759 TRACE nova     manager=CONF.cells.manager)
  2014-04-29 11:52:05.240 16759 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 257, in create
  2014-04-29 11:52:05.240 16759 TRACE nova     db_allowed=db_allowed)
  2014-04-29 11:52:05.240 16759 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 139, in __init__
  2014-04-29 11:52:05.240 16759 TRACE nova     self.manager = manager_class(host=self.host, *args, **kwargs)
  2014-04-29 11:52:05.240 16759 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/cells/manager.py", line 87, in __init__
  2014-04-29 11:52:05.240 16759 TRACE nova     self.state_manager = cell_state_manager()
  2014-04-29 11:52:05.240 16759 TRACE nova TypeError: __init__() takes exactly 3 arguments (1 given)

  
  I have had a dig into the code and it appears that CellsManager creates an 
instance of CellStateManager with no arguments. CellStateManager __new__ runs 
and creates an instance of CellStateManagerFile which runs __new__ and __init__ 
with cell_state_cls and cells_config_path set. At this point __new__ returns 
CellStateManagerFile and the new instance's __init__() method is invoked 
(CellStateManagerFile.__init__) with the original arguments (there weren't any) 
which then results in the stack trace.
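The `__new__`/`__init__` interaction described above can be reproduced outside Nova: when `__new__` returns an instance of (a subclass of) the requested class, Python then calls that instance's `__init__` with the *original* call's arguments, not the ones `__new__` used. A minimal sketch with illustrative names:

```python
class Manager:
    """Toy stand-in for CellStateManager: __new__ picks a subclass."""
    def __new__(cls, *args):
        if cls is Manager:
            # Construct the subclass with the two extra arguments it needs...
            return FileManager("cell_cls", "/etc/cells.json")
        return super().__new__(cls)

class FileManager(Manager):
    """Toy stand-in for CellStateManagerFile."""
    def __init__(self, cell_state_cls, cells_config_path):
        self.cells_config_path = cells_config_path

# Manager() returns a FileManager instance, so Python re-runs
# FileManager.__init__ with Manager()'s (empty) argument list,
# raising the same TypeError as in the stack trace.
try:
    Manager()
    failed = False
except TypeError:
    failed = True
```

This is why moving the config-path derivation into `CellStateManagerFile.__init__`, as in the local patch, avoids the crash: the re-run `__init__` no longer needs arguments the caller never had.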

  It seems reasonable for CellStateManagerFile to derive the
  cells_config_path info for itself so I've patched it locally with

  === modified file 'state.py'
  --- state.py  2014-04-30 15:10:16 +
  +++ state.py  2014-04-30 15:10:26 +
  @@ -155,7 +155,7 @@
   config_path = CONF.find_file(cells_config)
   if not config_path:
   raise 
cfg.ConfigFilesNotFoundError(config_files=[cells_config])
  -return CellStateManagerFile(cell_state_cls, config_path)
  +return CellStateManagerFile(cell_state_cls)
   
   return CellStateManagerDB(cell_state_cls)
   
  @@ -450,7 +450,9 @@
   
   
   class CellStateManagerFile(CellStateManager):
  -def __init__(self, cell_state_cls, cells_config_path):
  +def __init__(self, cell_state_cls=None):
  +cells_config = CONF.cells.cells_config
  +cells_config_path = CONF.find_file(cells_config)
   self.cells_config_path = cells_config_path
   super(CellStateManagerFile, self).__init__(cell_state_cls)
   

  
  Ubuntu: 14.04
  nova-cells: 1:2014.1-0ubuntu1

  nova.conf:

  [DEFAULT]
  dhcpbridge_flagfile=/etc/nova/nova.conf
  dhcpbridge=/usr/bin/nova-dhcpbridge
  logdir=/var/log/nova
  state_path=/var/lib/nova
  lock_path=/var/lock/nova
  

[Yahoo-eng-team] [Bug 1371559] Re: v2 image-update does not handle some schema properties properly

2014-10-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/126479
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=0f3b518028196b5c8c36b378928dae31c2c4a6fa
Submitter: Jenkins
Branch:proposed/juno

commit 0f3b518028196b5c8c36b378928dae31c2c4a6fa
Author: Kamil Rykowski kamil.rykow...@intel.com
Date:   Wed Sep 24 14:15:59 2014 +0200

Mark custom properties in image schema as non-base

Currently it is impossible to determine if a given image schema property
is a base or custom one, and knowledge of that can be handy in some
situations. Proposed change appends to every custom property a special
key which determines that it is not a base property.

Change-Id: I49255255df311036d516768afc55475c1f9aad47
Partial-Bug: #1371559
(cherry picked from commit 94c05cbdbb3a78b3df4df8d522555f34d2f0a166)


** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1371559

Title:
  v2 image-update does not handle some schema properties properly

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Python client library for Glance:
  Fix Committed

Bug description:
  Step1: Create an empty image using api v1 or v2.
  Step2:  Use glanceclient with v2 API to update image property called 
architecture:

  glance --os-image-api-version 2 image-update --property
  architecture='x86'

  This will show following error message:

  <html>
    <head>
      <title>409 Conflict</title>
    </head>
    <body>
      <h1>409 Conflict</h1>
      There was a conflict when trying to complete your request.<br /><br />
      Property architecture does not exist.

    </body>
  </html> (HTTP 409)

  The error shows up, because the client sends to glance API following
  data:

  [{"patch": "/architecture", "value": "x86", "op": "replace"}]

  instead of

  [{"patch": "/architecture", "value": "x86", "op": "add"}]

  The issue is somehow related to overridden patch method in
  SchemaBasedModel class, which propagates non existing properties from
  schema properties.
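The 409 follows from JSON Patch (RFC 6902) semantics: `replace` requires the target member to already exist, while `add` creates it. A toy illustration of the two ops on a flat document (not Glance's actual patch machinery; `apply_op` is a made-up helper):

```python
def apply_op(doc, op):
    """Apply a single flat JSON-Patch-style op to a dict."""
    key = op["path"].lstrip("/")
    if op["op"] == "replace":
        # replace on a missing member must fail -- this is the 409 case
        if key not in doc:
            raise KeyError("Property '%s' does not exist." % key)
        doc[key] = op["value"]
    elif op["op"] == "add":
        # add creates the member (or overwrites an existing one)
        doc[key] = op["value"]
    return doc
```

So for an image that never had `architecture` set, the client must emit `op: add`, which is exactly what the overridden `patch` method fails to do.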

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1371559/+subscriptions



[Yahoo-eng-team] [Bug 1378389] [NEW] os-interface:show will not handle PortNotFoundClient exception from neutron

2014-10-07 Thread Matt Riedemann
Public bug reported:

The os-interface:show method in the v2/v3 compute API is catching a
NotFound(NovaException):

http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/contrib/attach_interfaces.py?id=2014.2.rc1#n67

But when using the neutronv2 API, if you get a port not found it's going
to raise up a PortNotFoundClient(NeutronClientException), which won't be
handled by the NotFound(NovaException) in the compute API since it's not
the same type of exception.

http://git.openstack.org/cgit/openstack/nova/tree/nova/network/neutronv2/api.py?id=2014.2.rc1#n584

This bug has two parts:

1. The neutronv2 API show_port method needs to return nova exceptions,
not neutron client exceptions.

2. The os-interfaces:show v2/v3 APIs need to handle the exceptions (404
is handled, but neutron can also raise Forbidden/Unauthorized which the
compute API isn't handling).
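The usual fix for part 1 is to translate client-library exceptions into Nova's own at the network API boundary, so upper layers only ever see one exception family. A hedged sketch with stand-in exception classes (not the real nova/neutronclient types):

```python
class NovaNotFound(Exception):
    """Stand-in for nova.exception.NotFound."""

class NeutronClientException(Exception):
    """Stand-in for neutronclient's base exception."""

class PortNotFoundClient(NeutronClientException):
    """Stand-in for neutronclient's port-not-found error."""

def neutron_show_port(port_id):
    # Pretend the neutron client always reports the port as missing.
    raise PortNotFoundClient("port %s not found" % port_id)

def show_port(port_id):
    """Boundary method: re-raise client errors as Nova exceptions."""
    try:
        return neutron_show_port(port_id)
    except PortNotFoundClient as exc:
        raise NovaNotFound(str(exc))
```

With this in place, the compute API's existing `except NotFound` handler works, because `show_port` no longer leaks neutronclient types; the Forbidden/Unauthorized cases from part 2 would get the same translation treatment.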

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: In Progress


** Tags: api network

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378389

Title:
  os-interface:show will not handle PortNotFoundClient exception from
  neutron

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The os-interface:show method in the v2/v3 compute API is catching a
  NotFound(NovaException):

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/contrib/attach_interfaces.py?id=2014.2.rc1#n67

  But when using the neutronv2 API, if you get a port not found it's
  going to raise up a PortNotFoundClient(NeutronClientException), which
  won't be handled by the NotFound(NovaException) in the compute API
  since it's not the same type of exception.

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/network/neutronv2/api.py?id=2014.2.rc1#n584

  This bug has two parts:

  1. The neutronv2 API show_port method needs to return nova exceptions,
  not neutron client exceptions.

  2. The os-interfaces:show v2/v3 APIs need to handle the exceptions
  (404 is handled, but neutron can also raise Forbidden/Unauthorized
  which the compute API isn't handling).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378389/+subscriptions



[Yahoo-eng-team] [Bug 1378388] [NEW] Performance regression uploading images to glance in juno

2014-10-07 Thread James Page
Public bug reported:

Testing: 1:2014.2~rc1-0ubuntu1

Uploads of standard ubuntu images to glance, backed by ceph, are 10x
slower than on Icehouse on the same infrastructure. With Icehouse I saw
around 200MBps, with Juno around 20Mbps.

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: glance (Ubuntu)
 Importance: Undecided
 Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

** Summary changed:

- Performance regression uploading images to glance
+ Performance regression uploading images to glance in juno

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1378388

Title:
  Performance regression uploading images to glance in juno

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in “glance” package in Ubuntu:
  New

Bug description:
  Testing: 1:2014.2~rc1-0ubuntu1

  Uploads of standard ubuntu images to glance, backed by ceph, are 10x
  slower than on Icehouse on the same infrastructure. With Icehouse I
  saw around 200MBps, with Juno around 20Mbps.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1378388/+subscriptions



[Yahoo-eng-team] [Bug 1378395] [NEW] Slow MySQL queries with lots of deleted instances

2014-10-07 Thread Johannes Erdfelt
Public bug reported:

While analyzing the slow query log in our public cloud, we ran across
this slow query:

# Query_time: 21.113669  Lock_time: 0.000485 Rows_sent: 46  Rows_examined: 
848516
SET timestamp=1412484367;
SELECT anon_1.instances_created_at AS anon_1_instances_created_at, 
anon_1.instances_updated_at AS anon_1_instances_updated_at, 
anon_1.instances_deleted_at AS anon_1_instances_deleted_at, 
anon_1.instances_deleted AS anon_1_instances_deleted, anon_1.instances_id AS 
anon_1_instances_id, anon_1.instances_user_id AS anon_1_instances_user_id, 
anon_1.instances_project_id AS anon_1_instances_project_id, 
anon_1.instances_image_ref AS anon_1_instances_image_ref, 
anon_1.instances_kernel_id AS anon_1_instances_kernel_id, 
anon_1.instances_ramdisk_id AS anon_1_instances_ramdisk_id, 
anon_1.instances_hostname AS anon_1_instances_hostname, 
anon_1.instances_launch_index AS anon_1_instances_launch_index, 
anon_1.instances_key_name AS anon_1_instances_key_name, 
anon_1.instances_key_data AS anon_1_instances_key_data, 
anon_1.instances_power_state AS anon_1_instances_power_state, 
anon_1.instances_vm_state AS anon_1_instances_vm_state, 
anon_1.instances_task_state AS anon_1_instances_task_state, anon_1.instan
 ces_memory_mb AS anon_1_instances_memory_mb, anon_1.instances_vcpus AS 
anon_1_instances_vcpus, anon_1.instances_root_gb AS anon_1_instances_root_gb, 
anon_1.instances_ephemeral_gb AS anon_1_instances_ephemeral_gb, 
anon_1.instances_ephemeral_key_uuid AS anon_1_instances_ephemeral_key_uuid, 
anon_1.instances_host AS anon_1_instances_host, anon_1.instances_node AS 
anon_1_instances_node, anon_1.instances_instance_type_id AS 
anon_1_instances_instance_type_id, anon_1.instances_user_data AS 
anon_1_instances_user_data, anon_1.instances_reservation_id AS 
anon_1_instances_reservation_id, anon_1.instances_scheduled_at AS 
anon_1_instances_scheduled_at, anon_1.instances_launched_at AS 
anon_1_instances_launched_at, anon_1.instances_terminated_at AS 
anon_1_instances_terminated_at, anon_1.instances_availability_zone AS 
anon_1_instances_availability_zone, anon_1.instances_display_name AS 
anon_1_instances_display_name, anon_1.instances_display_description AS 
anon_1_instances_display_description, anon_1
 .instances_launched_on AS anon_1_instances_launched_on, 
anon_1.instances_locked AS anon_1_instances_locked, anon_1.instances_locked_by 
AS anon_1_instances_locked_by, anon_1.instances_os_type AS 
anon_1_instances_os_type, anon_1.instances_architecture AS 
anon_1_instances_architecture, anon_1.instances_vm_mode AS 
anon_1_instances_vm_mode, anon_1.instances_uuid AS anon_1_instances_uuid, 
anon_1.instances_root_device_name AS anon_1_instances_root_device_name, 
anon_1.instances_default_ephemeral_device AS 
anon_1_instances_default_ephemeral_device, anon_1.instances_default_swap_device 
AS anon_1_instances_default_swap_device, anon_1.instances_config_drive AS 
anon_1_instances_config_drive, anon_1.instances_access_ip_v4 AS 
anon_1_instances_access_ip_v4, anon_1.instances_access_ip_v6 AS 
anon_1_instances_access_ip_v6, anon_1.instances_auto_disk_config AS 
anon_1_instances_auto_disk_config, anon_1.instances_progress AS 
anon_1_instances_progress, anon_1.instances_shutdown_terminate AS anon_1_instanc
 es_shutdown_terminate, anon_1.instances_disable_terminate AS 
anon_1_instances_disable_terminate, anon_1.instances_cell_name AS 
anon_1_instances_cell_name, anon_1.instances_internal_id AS 
anon_1_instances_internal_id, anon_1.instances_cleaned AS 
anon_1_instances_cleaned, security_groups_1.created_at AS 
security_groups_1_created_at, security_groups_1.updated_at AS 
security_groups_1_updated_at, security_groups_1.deleted_at AS 
security_groups_1_deleted_at, security_groups_1.deleted AS 
security_groups_1_deleted, security_groups_1.id AS security_groups_1_id, 
security_groups_1.name AS security_groups_1_name, security_groups_1.description 
AS security_groups_1_description, security_groups_1.user_id AS 
security_groups_1_user_id, security_groups_1.project_id AS 
security_groups_1_project_id, instance_info_caches_1.created_at AS 
instance_info_caches_1_created_at, instance_info_caches_1.updated_at AS 
instance_info_caches_1_updated_at, instance_info_caches_1.deleted_at AS 
instance_info_caches_1_de
 leted_at, instance_info_caches_1.deleted AS instance_info_caches_1_deleted, 
instance_info_caches_1.id AS instance_info_caches_1_id, 
instance_info_caches_1.network_info AS instance_info_caches_1_network_info, 
instance_info_caches_1.instance_uuid AS instance_info_caches_1_instance_uuid
FROM (SELECT instances.created_at AS instances_created_at, instances.updated_at 
AS instances_updated_at, instances.deleted_at AS instances_deleted_at, 
instances.deleted AS instances_deleted, instances.id AS instances_id, 
instances.user_id AS instances_user_id, instances.project_id AS 
instances_project_id, instances.image_ref AS instances_image_ref, 
instances.kernel_id AS instances_kernel_id, instances.ramdisk_id AS 

[Yahoo-eng-team] [Bug 1378398] [NEW] Remove legacy weight from l3 agent _process_routers

2014-10-07 Thread Carl Baldwin
Public bug reported:

Some work in Juno around adding a new router processing queue to the
l3_agent.py obsoleted much of the logic in the _process_routers method.
The following can be simplified.

1. No loop is necessary since the list passed always has exactly one router in 
it.
2. No thread pool is necessary because there is only one thread active and the 
method waits for it to complete at the end.
3. The set logic is no longer needed.
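The shape of the simplification, sketched in generic Python (the real method's signatures and the agent's green-thread pool differ): a one-element list pushed through a worker pool plus set bookkeeping reduces to a single direct call.

```python
from concurrent.futures import ThreadPoolExecutor

def process(router):
    # placeholder for the per-router processing work
    return router["id"]

def process_routers_legacy(routers):
    # old shape: spawn into a pool and wait for completion, even though
    # the new processing queue always hands over a single-router list
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(process, r) for r in routers]
        return [f.result() for f in futures]

def process_router(router):
    # simplified shape: one router in, one direct call; no pool, no sets
    return process(router)
```

Since the caller already waits for the pool at the end, dropping it changes no observable behavior, only removes dead machinery.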

** Affects: neutron
 Importance: Wishlist
 Assignee: Carl Baldwin (carl-baldwin)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378398

Title:
  Remove legacy weight from l3 agent _process_routers

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Some work in Juno around adding a new router processing queue to the
  l3_agent.py obsoleted much of the logic in the _process_routers
  method.  The following can be simplified.

  1. No loop is necessary since the list passed always has exactly one router 
in it.
  2. No thread pool is necessary because there is only one thread active and 
the method waits for it to complete at the end.
  3. The set logic is no longer needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1378398/+subscriptions



[Yahoo-eng-team] [Bug 1378388] Re: Performance regression uploading images to glance in juno

2014-10-07 Thread James Page
I think the problem is that glance is using a tiny calculated chunk size
(from python-glance-store):

chunk = self.conf.glance_store.rbd_store_chunk_size
self.chunk_size = chunk * (1024 ^ 2)

this should be (from original glance rbd driver):

   1024 ** 2

Resulting in a lot of tiny chunked writes instead of the default 8MB
writes.
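The root cause is Python's `^` operator, which is bitwise XOR rather than exponentiation, so the computed chunk size collapses from megabytes to roughly a kilobyte:

```python
rbd_store_chunk_size = 8  # glance_store default, in MB

# buggy: ^ is bitwise XOR, so 1024 ^ 2 == 1026
buggy_chunk = rbd_store_chunk_size * (1024 ^ 2)    # 8208 bytes (~8 KB)

# intended: ** is exponentiation, so 1024 ** 2 == 1048576
fixed_chunk = rbd_store_chunk_size * (1024 ** 2)   # 8388608 bytes (8 MB)
```

A ~1000x smaller write size lines up with the order-of-magnitude throughput drop reported above.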

** Also affects: python-glance-store (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: glance (Ubuntu)
 Assignee: (unassigned) => James Page (james-page)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1378388

Title:
  Performance regression uploading images to glance in juno

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in “glance” package in Ubuntu:
  Invalid
Status in “python-glance-store” package in Ubuntu:
  New

Bug description:
  Testing: 1:2014.2~rc1-0ubuntu1

  Uploads of standard ubuntu images to glance, backed by ceph, are 10x
  slower than on Icehouse on the same infrastructure. With Icehouse I
  saw around 200MBps, with Juno around 20Mbps.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1378388/+subscriptions



[Yahoo-eng-team] [Bug 1378388] Re: Performance regression uploading images to glance in juno

2014-10-07 Thread James Page
I see this is fixed already in glance_store; hopefully there will be
another release soon, in the meantime I'll cherry-pick the commit that
fixes this issue.

** Changed in: glance (Ubuntu)
 Assignee: James Page (james-page) => (unassigned)

** Changed in: glance (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1378388

Title:
  Performance regression uploading images to glance in juno

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in “glance” package in Ubuntu:
  Invalid
Status in “python-glance-store” package in Ubuntu:
  New

Bug description:
  Testing: 1:2014.2~rc1-0ubuntu1

  Uploads of standard ubuntu images to glance, backed by ceph, are 10x
  slower than on Icehouse on the same infrastructure. With Icehouse I
  saw around 200MBps, with Juno around 20Mbps.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1378388/+subscriptions



[Yahoo-eng-team] [Bug 1378402] [NEW] Wrong version links href: https replaced by http

2014-10-07 Thread goldyfruit
Public bug reported:

Hi,

I use Keystone with SSL, so I should get an href link with https. The
problem is that the URL is in http, so many clients like Neutron or Heat
fail.

We are using RHEL 7 with OSP 5 (IceHouse)

# curl -s -XGET https://ca.ilovepopcorn.com:5000/v2.0 | json_pp

Result:

{
   "version" : {
      "media-types" : [
         {
            "base" : "application/json",
            "type" : "application/vnd.openstack.identity-v2.0+json"
         },
         {
            "base" : "application/xml",
            "type" : "application/vnd.openstack.identity-v2.0+xml"
         }
      ],
      "status" : "stable",
      "updated" : "2014-04-17T00:00:00Z",
      "links" : [
         {
            "rel" : "self",
            "href" : "http://ca.ilovepopcorn.com:5000/v2.0/"
         },
         {
            "href" : "http://docs.openstack.org/api/openstack-identity-service/2.0/content/",
            "type" : "text/html",
            "rel" : "describedby"
         },
         {
            "href" : "http://docs.openstack.org/api/openstack-identity-service/2.0/identity-dev-guide-2.0.pdf",
            "type" : "application/pdf",
            "rel" : "describedby"
         }
      ],
      "id" : "v2.0"
   }
}

Thanks for your help :)

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1378402

Title:
  Wrong version links href: https replaced by http

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Hi,

  I use Keystone with SSL, so I should get an href link with https. The
  problem is that the URL is in http, so many clients like Neutron or
  Heat fail.

  We are using RHEL 7 with OSP 5 (IceHouse)

  # curl -s -XGET https://ca.ilovepopcorn.com:5000/v2.0 | json_pp

  Result:

  {
     "version" : {
        "media-types" : [
           {
              "base" : "application/json",
              "type" : "application/vnd.openstack.identity-v2.0+json"
           },
           {
              "base" : "application/xml",
              "type" : "application/vnd.openstack.identity-v2.0+xml"
           }
        ],
        "status" : "stable",
        "updated" : "2014-04-17T00:00:00Z",
        "links" : [
           {
              "rel" : "self",
              "href" : "http://ca.ilovepopcorn.com:5000/v2.0/"
           },
           {
              "href" : "http://docs.openstack.org/api/openstack-identity-service/2.0/content/",
              "type" : "text/html",
              "rel" : "describedby"
           },
           {
              "href" : "http://docs.openstack.org/api/openstack-identity-service/2.0/identity-dev-guide-2.0.pdf",
              "type" : "application/pdf",
              "rel" : "describedby"
           }
        ],
        "id" : "v2.0"
     }
  }

  Thanks for your help :)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1378402/+subscriptions



[Yahoo-eng-team] [Bug 1370265] Re: Crash on describing EC2 volume backed image with multiple devices

2014-10-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/126520
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f98c28228b6db5b0796e9669b6bd692b82bbfa6d
Submitter: Jenkins
Branch:proposed/juno

commit f98c28228b6db5b0796e9669b6bd692b82bbfa6d
Author: liyingjun liyingjun1...@gmail.com
Date:   Sat Sep 6 18:41:51 2014 +0800

Fix KeyError for euca-describe-images

EC2 describe images crashes on volume backed instance snapshot which has
several volumes.

Change-Id: Ibe278688b118db01c9c3ae1763954adf19c7ee0d
Closes-bug: #1370265
(cherry picked from commit 1dea1cd710d54d4a2a584590e4ccf59dd3a41faa)


** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370265

Title:
  Crash on describing EC2 volume backed image with multiple devices

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  EC2 describe images crashes on volume backed instance snapshot which
  has several volumes:

  $ euca-describe-images
  euca-describe-images: error (KeyError): Unknown error occurred.

  Steps to reproduce
  1 Create bootable volume
  $ cinder create --image <image-id> <size>

  2 Boot instance from volume
  $ nova boot --flavor m1.nano --block-device-mapping /dev/vda=<volume-id>:::1 
inst

  3 Create empty volume
  $ cinder create 1

  4 Attach the volume to the instance
  $ nova volume-attach inst <empty-volume-id> /dev/vdd

  5 Create volume backed snapshot
  $ nova image-create inst sn-in

  6 Describe EC2 images
  $ euca-describe-images

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370265/+subscriptions



[Yahoo-eng-team] [Bug 753280] Re: We should use policy routing for VM's.

2014-10-07 Thread Tom Fifield
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/753280

Title:
  We should use policy routing for VM's.

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  We currently modify the host's default routing table. We should leave
  that alone and apply a different routing table for VM's.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/753280/+subscriptions



[Yahoo-eng-team] [Bug 1233259] Re: Midonet plugin clean up dhcp correctly

2014-10-07 Thread Tom Fifield
** Changed in: neutron
   Status: In Progress => Confirmed

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1233259

Title:
  Midonet plugin clean up dhcp correctly

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  Midonet plugin. When a subnet is deleted clean the proper dhcp entry,
  not always the first one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1233259/+subscriptions



[Yahoo-eng-team] [Bug 884479] Re: Ability to display just a single enlarged table...

2014-10-07 Thread Tom Fifield
Based on the comment from 2012, and the complete lack of activity, I'm
going to mark this up as Opinion.

** Changed in: horizon
   Status: Triaged => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/884479

Title:
  Ability to display just a single enlarged table...

Status in OpenStack Dashboard (Horizon):
  Opinion

Bug description:
  Related to blueprint: improve-user-experience.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/884479/+subscriptions



[Yahoo-eng-team] [Bug 1378402] Re: Wrong version links href: https replaced by http

2014-10-07 Thread goldyfruit
@Chmouel: You're right ! I fixed the public_endpoint and now it works !

Thanks
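For anyone hitting the same symptom: the relevant setting is keystone.conf's `public_endpoint` option (with an `admin_endpoint` counterpart), which keystone uses when building version links. A sketch using the hostname from this report:

```ini
[DEFAULT]
# Advertise the SSL URLs in version documents and link hrefs
public_endpoint = https://ca.ilovepopcorn.com:5000/
admin_endpoint = https://ca.ilovepopcorn.com:35357/
```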

** Changed in: keystone
   Status: New => Fix Released

** Changed in: keystone
 Assignee: (unassigned) => goldyfruit (goldyfruit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1378402

Title:
  Wrong version links href: https replaced by http

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  Hi,

  I use Keystone with SSL, so I should get an href link with https. The
  problem is that the URL is in http, so many clients like Neutron or
  Heat fail.

  We are using RHEL 7 with OSP 5 (IceHouse)

  # curl -s -XGET https://ca.ilovepopcorn.com:5000/v2.0 | json_pp

  Result:

  {
     "version" : {
        "media-types" : [
           {
              "base" : "application/json",
              "type" : "application/vnd.openstack.identity-v2.0+json"
           },
           {
              "base" : "application/xml",
              "type" : "application/vnd.openstack.identity-v2.0+xml"
           }
        ],
        "status" : "stable",
        "updated" : "2014-04-17T00:00:00Z",
        "links" : [
           {
              "rel" : "self",
              "href" : "http://ca.ilovepopcorn.com:5000/v2.0/"
           },
           {
              "href" : "http://docs.openstack.org/api/openstack-identity-service/2.0/content/",
              "type" : "text/html",
              "rel" : "describedby"
           },
           {
              "href" : "http://docs.openstack.org/api/openstack-identity-service/2.0/identity-dev-guide-2.0.pdf",
              "type" : "application/pdf",
              "rel" : "describedby"
           }
        ],
        "id" : "v2.0"
     }
  }
  Thanks for your help :)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1378402/+subscriptions



[Yahoo-eng-team] [Bug 1378459] [NEW] Missing info_cache.save() in db sqlalchemy api

2014-10-07 Thread Nathanael Burton
Public bug reported:

Missing network information (nw_info) stored in the
'instance_info_caches' DB table was failing to be successfully healed by
the _heal_instance_info_cache() periodic task.  The periodic task
correctly fires and gets the nw_info to update, but the DB call to
instance_info_cache_update() fails to save the values.

Correctly handle both updating and adding missing nw_info.

** Affects: nova
 Importance: Undecided
 Assignee: Nathanael Burton (mathrock)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Nathanael Burton (mathrock)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378459

Title:
  Missing info_cache.save() in db sqlalchemy api

Status in OpenStack Compute (Nova):
  New

Bug description:
  Missing network information (nw_info) stored in the
  'instance_info_caches' DB table was failing to be successfully healed
  by the _heal_instance_info_cache() periodic task.  The periodic task
  correctly fires and gets the nw_info to update, but the DB call to
  instance_info_cache_update() fails to save the values.

  Correctly handle both updating and adding missing nw_info.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378459/+subscriptions



[Yahoo-eng-team] [Bug 1378461] [NEW] nova.objects.network_request.NetworkRequest's version is incorrect

2014-10-07 Thread Jay Pipes
Public bug reported:

from nova/objects/network_request.py:

class NetworkRequest(obj_base.NovaObject):
# Version 1.0: Initial version
# Version 1.1: Added pci_request_id
VERSION = '1.0'

VERSION should be 1.1, per the comment above it.
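A minimal sketch (not the actual nova object hierarchy) of the convention the bug describes: the version-history comments and the `VERSION` attribute are meant to stay in sync, so adding a field bumps both together.

```python
# Illustrative stand-in class; the real one subclasses
# nova.objects.base.NovaObject.
class NetworkRequest(object):
    # Version 1.0: Initial version
    # Version 1.1: Added pci_request_id
    VERSION = '1.1'  # the fix: match the newest entry in the comment log

print(NetworkRequest.VERSION)  # 1.1
```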

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: juno-rc-potential low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378461

Title:
  nova.objects.network_request.NetworkRequest's version is incorrect

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  from nova/objects/network_request.py:

  class NetworkRequest(obj_base.NovaObject):
  # Version 1.0: Initial version
  # Version 1.1: Added pci_request_id
  VERSION = '1.0'

  VERSION should be 1.1, per the comment above it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378461/+subscriptions



[Yahoo-eng-team] [Bug 1378452] [NEW] fix tiny gap in navigation sidebar

2014-10-07 Thread Cindy Lu
Public bug reported:

see image

** Affects: horizon
 Importance: Undecided
 Assignee: Cindy Lu (clu-m)
 Status: New

** Attachment added: Untitled.png
   
https://bugs.launchpad.net/bugs/1378452/+attachment/4227664/+files/Untitled.png

** Changed in: horizon
 Assignee: (unassigned) => Cindy Lu (clu-m)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378452

Title:
  fix tiny gap in navigation sidebar

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  see image

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378452/+subscriptions



[Yahoo-eng-team] [Bug 1378388] Re: Performance regression uploading images to glance in juno

2014-10-07 Thread Launchpad Bug Tracker
This bug was fixed in the package python-glance-store - 0.1.8-1ubuntu1

---
python-glance-store (0.1.8-1ubuntu1) utopic; urgency=medium

  * d/p/fix-rbd-chunk-size.patch: Cherry pick fix from upstream VCS to
correctly calculate RBD chunk size, resolving performance regression
with rbd backend (LP: #1378388).
 -- James Page james.p...@ubuntu.com   Tue, 07 Oct 2014 16:08:15 +0100

** Changed in: python-glance-store (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1378388

Title:
  Performance regression uploading images to glance in juno

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in "glance" package in Ubuntu:
  Invalid
Status in "python-glance-store" package in Ubuntu:
  Fix Released

Bug description:
  Testing: 1:2014.2~rc1-0ubuntu1

  Uploads of standard ubuntu images to glance, backed by ceph, are 10x
  slower than on icehouse on the same infrastructure.  With icehouse i
  saw around 200MBps, with juno around 20Mbps.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1378388/+subscriptions



[Yahoo-eng-team] [Bug 1378468] [NEW] DBDuplicateError found sometimes when router_interface_delete issued with DVR

2014-10-07 Thread Swaminathan Vasudevan
Public bug reported:

When a router_interface_delete is called, this calls the
schedule_snat_router that causes the DBDuplicateError when it  is
trying to bind the router twice.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378468

Title:
  DBDuplicateError found sometimes when router_interface_delete issued
  with DVR

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a router_interface_delete is called, this calls the
  schedule_snat_router that causes the DBDuplicateError when it  is
  trying to bind the router twice.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1378468/+subscriptions



[Yahoo-eng-team] [Bug 1373430] Re: Error while compressing files

2014-10-07 Thread James Slagle
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1373430

Title:
  Error while compressing files

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  All ci jobs failing

  Earliest Failure : 2014-09-24 09:51:55 UTC
  Example : 
http://logs.openstack.org/50/123150/3/check-tripleo/check-tripleo-ironic-undercloud-precise-nonha/3c60b32/console.html

  
  Sep 24 11:51:43 overcloud-controller0-dxjfgv3agarr os-collect-config[724]: 
dib-run-parts Wed Sep 24 11:51:43 UTC 2014 Running 
/opt/stack/os-config-refresh/post-configure.d/14-horizon
  Sep 24 11:51:53 overcloud-controller0-dxjfgv3agarr os-collect-config[724]: 
CommandError: An error occured during rendering 
/opt/stack/venvs/openstack/lib/python2.7/site-packages/horizon/templates/horizon/_scripts.html:
 'horizon/lib/bootstrap_datepicker/locales/bootstrap-datepicker..js' could not 
be found in the COMPRESS_ROOT 
'/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/static'
 or with staticfiles.
  Sep 24 11:51:53 overcloud-controller0-dxjfgv3agarr os-collect-config[724]: 
Found 'compress' tags in:
  Sep 24 11:51:53 overcloud-controller0-dxjfgv3agarr os-collect-config[724]: 
/opt/stack/venvs/openstack/lib/python2.7/site-packages/horizon/templates/horizon/_scripts.html
  Sep 24 11:51:53 overcloud-controller0-dxjfgv3agarr os-collect-config[724]: 
/opt/stack/venvs/openstack/lib/python2.7/site-packages/horizon/templates/horizon/_conf.html
  Sep 24 11:51:53 overcloud-controller0-dxjfgv3agarr os-collect-config[724]: 
/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/templates/_stylesheets.html
  Sep 24 11:51:53 overcloud-controller0-dxjfgv3agarr os-collect-config[724]: 
Compressing... [2014-09-24 11:51:53,459] (os-refresh-config) [ERROR] during 
post-configure phase. [Command '['dib-run-parts', 
'/opt/stack/os-config-refresh/post-configure.d']' returned non-zero exit status 
1]

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1373430/+subscriptions



[Yahoo-eng-team] [Bug 1378508] [NEW] KeyError in DHCP RPC when port_update happens - this is seen when a delete_port event occurs

2014-10-07 Thread Swaminathan Vasudevan
Public bug reported:

When there is a delete_port event, occasionally we are seeing a TRACE in
the dhcp_rpc.py file.

2014-10-07 12:31:39.803 DEBUG neutron.api.rpc.handlers.dhcp_rpc 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] Update dhcp port {u'port': 
{u'network_id': u'12548499-8387-480e-b29c-625dbf320ecf', u'fixed_ips': 
[{u'subnet_id': u'88031ffe-9149-4e96-a022-65468f6bcc0e'}]}} from ubuntu. from 
(pid=4414) update_dhcp_port 
/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py:290
2014-10-07 12:31:39.803 DEBUG neutron.openstack.common.lockutils 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] Got semaphore db-access 
from (pid=4414) lock 
/opt/stack/neutron/neutron/openstack/common/lockutils.py:168
2014-10-07 12:31:39.832 ERROR oslo.messaging.rpc.dispatcher 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] Exception during message 
handling: 'network_id'
2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py, line 294, in 
update_dhcp_port
2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher 'update_port')
2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py, line 81, in 
_port_action
2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher net_id = 
port['port']['network_id']
2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher KeyError: 
'network_id'
2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher 
2014-10-07 12:31:39.833 ERROR oslo.messaging._drivers.common 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] Returning exception 
'network_id' to caller
2014-10-07 12:31:39.833 ERROR oslo.messaging._drivers.common 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] ['Traceback (most recent 
call last):\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply\nincoming.message))\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, 
args)\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File 
/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py, line 294, in 
update_dhcp_port\n\'update_port\')\n', '  File 
/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py, line 81, in 
_port_action\nnet_id = port[\'port\'][\'network_id\']\n', KeyError: 
'network_id'\n]
2014-10-07 12:31:39.839 DEBUG neutron.context 
[req-7d40234b-6e11-4645-9bab-8f9958df5064 None None] Arguments dropped when 
creating context: {u'project_name': None, u'tenant': None} from (pid=4414) 
__init__ /opt/stack/neutron/neutron/context.py:83
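A hypothetical defensive rewrite of the failing lookup in `_port_action` (names taken from the traceback above; this is a sketch, not the actual neutron fix): `update_dhcp_port` payloads may omit `network_id`, so the handler should fail with a clear error rather than a bare `KeyError`.

```python
def get_network_id(port_payload):
    # port_payload has the shape {'port': {...}} seen in the debug log.
    body = port_payload.get('port') or {}
    net_id = body.get('network_id')
    if net_id is None:
        # Reject the request explicitly instead of letting a KeyError
        # propagate through oslo.messaging back to the caller.
        raise ValueError("update_dhcp_port payload is missing 'network_id'")
    return net_id

print(get_network_id({'port': {'network_id': 'abc'}}))  # abc
```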

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378508

Title:
  KeyError in DHCP RPC when port_update happens - this is seen when a
  delete_port event occurs

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When there is a delete_port event, occasionally we are seeing a TRACE
  in the dhcp_rpc.py file.

  2014-10-07 12:31:39.803 DEBUG neutron.api.rpc.handlers.dhcp_rpc 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] Update dhcp port {u'port': 
{u'network_id': u'12548499-8387-480e-b29c-625dbf320ecf', u'fixed_ips': 
[{u'subnet_id': u'88031ffe-9149-4e96-a022-65468f6bcc0e'}]}} from ubuntu. from 
(pid=4414) update_dhcp_port 
/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py:290
  2014-10-07 12:31:39.803 DEBUG neutron.openstack.common.lockutils 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] Got semaphore db-access 
from (pid=4414) lock 
/opt/stack/neutron/neutron/openstack/common/lockutils.py:168
  2014-10-07 12:31:39.832 ERROR oslo.messaging.rpc.dispatcher 

[Yahoo-eng-team] [Bug 1378510] [NEW] creating snapshot

2014-10-07 Thread Bobby Yakovich
Public bug reported:

If this needs to be under NOVA please advise.

Running Icehouse on Ubuntu 14.04.
When a snapshot is created by a user in a project, the snapshot is not visible.
We determined that the snapshot gets listed only in the admin System panel, and
must be made public for anyone else to see it.
It should be listed only to the project of the user that created the snapshot.

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: glance snapshot

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1378510

Title:
  creating snapshot

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  If this needs to be under NOVA please advise.

  Running Icehouse on Ubuntu 14.04.
  When a snapshot is created by a user in a project, the snapshot is not visible.
  We determined that the snapshot gets listed only in the admin System panel,
and must be made public for anyone else to see it.
  It should be listed only to the project of the user that created the snapshot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1378510/+subscriptions



[Yahoo-eng-team] [Bug 1378514] [NEW] Allow setting max downtime for libvirt live migrations

2014-10-07 Thread Chris St. Pierre
Public bug reported:

As of libvirt 1.2.9, the maximum downtime for a live migration is
tunable during a migration, so it doesn't require any threading
foolishness. We should make this configurable in nova.conf so that large
instances can be migrated across relatively smaller network pipes.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378514

Title:
  Allow setting max downtime for libvirt live migrations

Status in OpenStack Compute (Nova):
  New

Bug description:
  As of libvirt 1.2.9, the maximum downtime for a live migration is
  tunable during a migration, so it doesn't require any threading
  foolishness. We should make this configurable in nova.conf so that
  large instances can be migrated across relatively smaller network
  pipes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378514/+subscriptions



[Yahoo-eng-team] [Bug 1378525] [NEW] Broken L3 HA migration should be blocked

2014-10-07 Thread Assaf Muller
Public bug reported:

While the HA property is update-able, and resulting router-get
invocations suggest that the router is HA, the migration
itself fails on the agent. This is deceiving and confusing
and should be blocked until the migration itself is fixed
in a future patch.

** Affects: neutron
 Importance: Undecided
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: juno-rc-potential

** Changed in: neutron
 Assignee: (unassigned) => Assaf Muller (amuller)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378525

Title:
  Broken L3 HA migration should be blocked

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  While the HA property is update-able, and resulting router-get
  invocations suggest that the router is HA, the migration
  itself fails on the agent. This is deceiving and confusing
  and should be blocked until the migration itself is fixed
  in a future patch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1378525/+subscriptions



[Yahoo-eng-team] [Bug 1378532] [NEW] Keystone token date format is inconsistent

2014-10-07 Thread Haneef Ali
Public bug reported:

The issued_at field is only in v3, but the v2 token response also has
issued_at. This is not a major issue, but the format of the date is
inconsistent:

"token": {
    "expires": "2014-10-08T00:51:35Z",
    "id": "a94eec3993a74bf4b26f91bd485f3b6d",
    "issued_at": "2014-10-07T20:51:36.005469",
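The two styles in the response above can be reproduced with the standard library: "expires" is a second-resolution UTC timestamp with a trailing Z, while "issued_at" carries microseconds and no timezone designator.

```python
from datetime import datetime

issued = datetime(2014, 10, 7, 20, 51, 36, 5469)

# "expires" style: seconds only, explicit Z suffix.
expires_style = issued.strftime('%Y-%m-%dT%H:%M:%SZ')
# "issued_at" style: bare isoformat, microseconds, no timezone.
issued_style = issued.isoformat()

print(expires_style)  # 2014-10-07T20:51:36Z
print(issued_style)   # 2014-10-07T20:51:36.005469
```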

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1378532

Title:
  Keystone token date format is inconsistent

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The issued_at field is only in v3, but the v2 token response also has
  issued_at. This is not a major issue, but the format of the date is
  inconsistent:

  "token": {
      "expires": "2014-10-08T00:51:35Z",
      "id": "a94eec3993a74bf4b26f91bd485f3b6d",
      "issued_at": "2014-10-07T20:51:36.005469",

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1378532/+subscriptions



[Yahoo-eng-team] [Bug 1377981] Re: Missing fix for ssh_execute (Exceptions thrown may contain passwords) (CVE-2014-7230, CVE-2014-7231)

2014-10-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/126592
Committed: 
https://git.openstack.org/cgit/openstack/cinder/commit/?id=d5efe6703297761215907eeaf703cec040e6ad25
Submitter: Jenkins
Branch: proposed/juno

commit d5efe6703297761215907eeaf703cec040e6ad25
Author: Tristan Cacqueray tristan.cacque...@enovance.com
Date:   Fri Oct 3 19:57:01 2014 +

Sync latest processutils from oslo-incubator

An earlier commit (Ia92aab76fa83d01c5fbf6f9d31df2463fc26ba5c) failed
to address ssh_execute(). This change set addresses ssh_execute.



oslo-incubator head:

commit 4990535fb5f3e2dc9b397e1a18c1b5dda94ef1c4
Merge: 9f5c700 2a130bf
Author: Jenkins jenk...@review.openstack.org
Date:   Mon Sep 29 23:12:14 2014 +

Merge "Script to list unreleased changes in all oslo projects"

---

The sync pulls in the following changes (newest to oldest):

6a60f842 - Mask passwords in exceptions and error messages (SSH)

---

Change-Id: Ie0caf32469126dd9feb44867adf27acb6e383958
Closes-Bug: #1377981
(cherry picked from commit 5e4e1f7ea71f9b4c7bd15809c58bc7a1838ed567)


** Changed in: cinder
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1377981

Title:
  Missing fix for ssh_execute (Exceptions thrown may contain passwords)
  (CVE-2014-7230, CVE-2014-7231)

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  In Progress
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New
Status in The Oslo library incubator:
  Fix Released
Status in oslo-incubator icehouse series:
  New
Status in OpenStack Security Advisories:
  In Progress

Bug description:
  Former bugs:
https://bugs.launchpad.net/ossa/+bug/1343604
https://bugs.launchpad.net/ossa/+bug/1345233

  The ssh_execute method is still affected in Cinder and Nova Icehouse release.
  It is prone to password leak if:
  - passwords are used on the command line
  - execution fail
  - calling code catch and log the exception

  The missing fix from oslo-incubator to be merged is:
  6a60f84258c2be3391541dbe02e30b8e836f6c22

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1377981/+subscriptions



[Yahoo-eng-team] [Bug 1378532] Re: Keystone token date format is inconsistent

2014-10-07 Thread Dolph Mathews
This is unfortunately true, but we can't change date formats as it would
be considered an API backwards incompatibility. Hopefully we've made v3
very consistent!

** Changed in: keystone
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1378532

Title:
  Keystone token date format is inconsistent

Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  The issued_at field is only in v3, but the v2 token response also has
  issued_at. This is not a major issue, but the format of the date is
  inconsistent:

  "token": {
      "expires": "2014-10-08T00:51:35Z",
      "id": "a94eec3993a74bf4b26f91bd485f3b6d",
      "issued_at": "2014-10-07T20:51:36.005469",

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1378532/+subscriptions



[Yahoo-eng-team] [Bug 1378548] [NEW] javascript crashes on project create

2014-10-07 Thread oleksii
Public bug reported:

*stable-havana in firefox 24 on windows7*
on create project in horizon javascript terminates script execution.
we have about 180+ users in cloud and 500+ roles
I tried to debug with firebug and it showed that we got response from server on 
url */projects/create
response contains 125 kilobytes of raw html 
(quick html analysis showed that it lists all roles with all users as options 
so it has at least 9 potential DOM elements)
I think that response handler cannot append raw html to DOM because of its 
amount of data.
Probable solution is to get json data from server and to use javascript 
template engine
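A hedged sketch of the reporter's suggested fix, in Python on the server side: return only the role/user membership data as JSON instead of pre-rendered HTML option markup, and let client-side templating build the DOM. The function and key names are illustrative, not actual Horizon code.

```python
import json

def project_create_payload(roles, users):
    # roles/users: sequences of (id, name) pairs; only data is shipped,
    # the browser renders <option> elements from it lazily.
    return json.dumps({
        'roles': [{'id': rid, 'name': name} for rid, name in roles],
        'users': [{'id': uid, 'name': name} for uid, name in users],
    })

payload = project_create_payload([(1, 'admin')], [(7, 'alice')])
print(payload)
```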

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: dom javascript

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378548

Title:
  javascript crashes on project create

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  *stable-havana in Firefox 24 on Windows 7*
  On project create in Horizon, JavaScript terminates script execution.
  We have about 180+ users in the cloud and 500+ roles.
  I tried to debug with Firebug and it showed that we got a response from the
server on the URL */projects/create.
  The response contains 125 kilobytes of raw HTML
  (a quick HTML analysis showed that it lists all roles with all users as
options, so it has at least 9 potential DOM elements).
  I think the response handler cannot append the raw HTML to the DOM because
of the amount of data.
  A probable solution is to get JSON data from the server and to use a
JavaScript template engine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378548/+subscriptions



[Yahoo-eng-team] [Bug 1378558] [NEW] Plugin panel not listed in configured panel group

2014-10-07 Thread Janet Yu
Public bug reported:

When adding panel Foo to the Admin dashboard's System panel group via
the openstack_dashboard/local/enabled/ directory, with something like:

PANEL = 'foo'
PANEL_DASHBOARD = 'admin'
PANEL_GROUP = 'admin'
ADD_PANEL = 'openstack_dashboard.dashboards.admin.foo.panel.Foo'

Foo appears under the panel group Other instead of System. This is the
error in the Apache log:

Could not process panel foo: 'tuple' object has no attribute 'append'
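A minimal reproduction of the logged error, illustrative only: appending to a panel collection that was built as a tuple raises exactly this `AttributeError`, which would make the loader fall back to the "Other" group.

```python
panels = ('overview', 'hypervisors')   # tuple: immutable, no append()
try:
    panels.append('foo')               # what the plugin loader attempts
except AttributeError as err:
    print(err)                         # 'tuple' object has no attribute 'append'

# Converting to a mutable list accepts the new panel.
panels = list(panels) + ['foo']
print(panels)                          # ['overview', 'hypervisors', 'foo']
```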

** Affects: horizon
 Importance: Undecided
 Assignee: Janet Yu (jwy)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Janet Yu (jwy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378558

Title:
  Plugin panel not listed in configured panel group

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When adding panel Foo to the Admin dashboard's System panel group via
  the openstack_dashboard/local/enabled/ directory, with something like:

  PANEL = 'foo'
  PANEL_DASHBOARD = 'admin'
  PANEL_GROUP = 'admin'
  ADD_PANEL = 'openstack_dashboard.dashboards.admin.foo.panel.Foo'

  Foo appears under the panel group Other instead of System. This is the
  error in the Apache log:

  Could not process panel foo: 'tuple' object has no attribute 'append'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378558/+subscriptions



[Yahoo-eng-team] [Bug 1378560] [NEW] Customizing Horizon doc needs to be cleaned up

2014-10-07 Thread Cindy Lu
Public bug reported:

http://docs.openstack.org/developer/horizon/topics/customizing.html

Still refers to horizon.less file.

To add icon to Table Action, use icon property. Example:

class CreateSnapshot(tables.LinkAction):
name = “snapshot” verbose_name = _(“Create Snapshot”) icon = “camera”

This should be formatted so that each attribute is on a new line.

Possibly run through the tutorial to make sure everything works?

** Affects: horizon
 Importance: Undecided
 Assignee: Cindy Lu (clu-m)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Cindy Lu (clu-m)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378560

Title:
  Customizing Horizon doc needs to be cleaned up

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  http://docs.openstack.org/developer/horizon/topics/customizing.html

  Still refers to horizon.less file.

  To add icon to Table Action, use icon property. Example:

  class CreateSnapshot(tables.LinkAction):
  name = “snapshot” verbose_name = _(“Create Snapshot”) icon = “camera”

  This should be formatted so that each attribute is on a new line.

  Possibly run through the tutorial to make sure everything works?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378560/+subscriptions



[Yahoo-eng-team] [Bug 1378568] [NEW] new horizon inverted tab style is very confusing and ugly

2014-10-07 Thread Walt Boring
Public bug reported:

I just noticed the new patch that landed in horizon that changes the
look/style of the tabs in horizon.

https://review.openstack.org/#/c/115649/


This new UI is confusing and very ugly. It's very hard to distinguish now
which tab is selected, as the tab is inverted. This is counterintuitive
compared to any modern UI.

When you have 3 tabs and the middle tab is selected, it now looks like
the two outer values are selected.

** Affects: horizon
 Importance: Medium
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378568

Title:
  new horizon inverted tab style is very confusing and ugly

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  I just noticed the new patch that landed in horizon that changes the
  look/style of the tabs in horizon.

  https://review.openstack.org/#/c/115649/

  
  This new UI is confusing and very ugly. It's very hard to distinguish now
which tab is selected, as the tab is inverted. This is counterintuitive
compared to any modern UI.

  When you have 3 tabs and the middle tab is selected, it now looks like
  the two outer values are selected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378568/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378568] Re: new horizon inverted tab style is very confusing and ugly

2014-10-07 Thread Gary W. Smith
** Changed in: horizon
   Importance: Medium => Undecided

** Changed in: horizon
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378568

Title:
  new horizon inverted tab style is very confusing and ugly

Status in OpenStack Dashboard (Horizon):
  Opinion

Bug description:
  I just noticed the new patch that landed in horizon that changes the
  look/style of the tabs in horizon.

  https://review.openstack.org/#/c/115649/

  
  This new UI is confusing and very ugly. It's very hard to distinguish which
tab is selected now that the tab styling is inverted, which is
counterintuitive compared to any modern UI.

  When you have 3 tabs and the middle tab is selected, it now looks like
  the two outer tabs are selected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378568/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293000] Re: Float IP DNS not handling NotImplemented Error

2014-10-07 Thread OpenStack Infra
** Changed in: nova
   Status: Invalid => In Progress

** Changed in: nova
 Assignee: (unassigned) => Haiwei Xu (xu-haiwei)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1293000

Title:
  Float IP DNS not handling NotImplemented Error

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  If the network API class does not support DNS operations, a generic error is
returned instead of NotImplemented.
  Ex:  
  nova dns-list www.google.com --name sss
  ERROR: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-43fce7be-5eb9-4a7f-a51c-c6473faf33de)
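
  The fix this report implies can be sketched as follows. This is a hedged
illustration, not Nova's actual code: catch NotImplementedError at the API
layer and translate it into a clear "not supported" error rather than a
generic 500. The method name `get_dns_entries_by_name` and the
`HTTPNotImplemented` class are illustrative assumptions.

```python
class HTTPNotImplemented(Exception):
    """Illustrative exception that a web layer would map to HTTP 501."""
    status_code = 501


def list_dns_entries(network_api, domain, name):
    """Delegate to the network API, surfacing unsupported operations clearly."""
    try:
        # get_dns_entries_by_name is an assumed method name for illustration.
        return network_api.get_dns_entries_by_name(name, domain)
    except NotImplementedError:
        # Return a 501 so the client sees "not supported", not a generic 500.
        raise HTTPNotImplemented(
            "DNS operations are not supported by this network API")
```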

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1293000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1217082] Re: baremetal driver needs better tests

2014-10-07 Thread Joe Gordon
We are about to delete nova-baremetal now that we are on Kilo and Ironic
has merged.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1217082

Title:
  baremetal driver needs better tests

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  https://bugs.launchpad.net/tripleo/+bug/1213967 wouldn't have gotten
  through if the baremetal driver had better unit tests, so let's add
  them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1217082/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1354396] Re: updated ip_address for router interfaces not honored on nodes hosting router

2014-10-07 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1354396

Title:
  updated ip_address for router interfaces not honored on nodes hosting
  router

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  When the router interface ip_address is updated on the controller, the
controller reports the update as successful.
  However, on the nodes hosting the router, the router interfaces in their
namespaces keep the old IP address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1354396/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp