[Yahoo-eng-team] [Bug 1299349] Re: upstream-translation-update Jenkins job failing

2014-03-28 Thread Andreas Jaeger
nova:

http://logs.openstack.org/8d/8df565d39cd3216bde8488e656957a49dd6949fa/post/nova-upstream-translation-update/d8f54e6/console.html#_2014-03-28_17_45_32_985

ceilometer:
http://logs.openstack.org/dc/dc72b333973eb47fc96df578bd01f59241f6a624/post/ceilometer-upstream-translation-update/14c7de2/

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: ceilometer
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1299349

Title:
  upstream-translation-update Jenkins job failing

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Compute (Nova):
  New

Bug description:
  Various upstream-translation-update jobs apparently succeed, but looking
  into the files, errors get ignored, and there are errors uploading the
  files to the translation site, such as:

  Error uploading file: There is a syntax error in your file.
  Line 1936: duplicate message definition...
  Line 7: ...this is the location of the first definition

  
  For keystone, see:
  
http://logs.openstack.org/78/7882359da114079e8411bd3f97c5628f2cd1c098/post/keystone-upstream-translation-update/27cbb22/

  This has been fixed on ironic with:
  https://review.openstack.org/#/c/83935
  See also: 
http://lists.openstack.org/pipermail/openstack-i18n/2014-March/000479.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1299349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299349] Re: upstream-translation-update Jenkins job failing

2014-03-28 Thread Andreas Jaeger
Same problem on cinder:
http://logs.openstack.org/55/55272d4f1e2eec617d3305603d5dca5f50572e30/post/cinder-upstream-translation-update/655775a/


** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1299349

Title:
  upstream-translation-update Jenkins job failing

Status in Cinder:
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  Various upstream-translation-update jobs apparently succeed, but looking
  into the files, errors get ignored, and there are errors uploading the
  files to the translation site, such as:

  Error uploading file: There is a syntax error in your file.
  Line 1936: duplicate message definition...
  Line 7: ...this is the location of the first definition

  
  For keystone, see:
  
http://logs.openstack.org/78/7882359da114079e8411bd3f97c5628f2cd1c098/post/keystone-upstream-translation-update/27cbb22/

  This has been fixed on ironic with:
  https://review.openstack.org/#/c/83935
  See also: 
http://lists.openstack.org/pipermail/openstack-i18n/2014-March/000479.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1299349/+subscriptions



[Yahoo-eng-team] [Bug 1299349] [NEW] upstream-translation-update Jenkins job failing

2014-03-28 Thread Andreas Jaeger
Public bug reported:

Various upstream-translation-update jobs apparently succeed, but looking into
the files, errors get ignored, and there are errors uploading the files
to the translation site, such as:

Error uploading file: There is a syntax error in your file.
Line 1936: duplicate message definition...
Line 7: ...this is the location of the first definition


For keystone, see:
http://logs.openstack.org/78/7882359da114079e8411bd3f97c5628f2cd1c098/post/keystone-upstream-translation-update/27cbb22/

This has been fixed on ironic with:
https://review.openstack.org/#/c/83935
See also: 
http://lists.openstack.org/pipermail/openstack-i18n/2014-March/000479.html
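
For anyone chasing the same failure locally, here is a minimal sketch (not the
ironic fix itself) that flags duplicate msgid entries in a generated .pot/.po
file, which is what the translation site rejects with "duplicate message
definition". It only handles single-line msgid entries; the file path is
whatever you pass on the command line.

import collections
import re
import sys

def find_duplicate_msgids(path):
    # Map each msgid string to the line numbers where it is defined.
    locations = collections.defaultdict(list)
    msgid_re = re.compile(r'^msgid "(.+)"\s*$')
    with open(path) as pot_file:
        for lineno, line in enumerate(pot_file, start=1):
            match = msgid_re.match(line)
            if match:
                locations[match.group(1)].append(lineno)
    return dict((msgid, lines) for msgid, lines in locations.items()
                if len(lines) > 1)

if __name__ == '__main__':
    for msgid, lines in find_duplicate_msgids(sys.argv[1]).items():
        print('duplicate msgid %r defined on lines %s' % (msgid, lines))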

** Affects: cinder
 Importance: Undecided
 Assignee: Andreas Jaeger (jaegerandi)
 Status: In Progress

** Affects: keystone
 Importance: Undecided
 Assignee: Andreas Jaeger (jaegerandi)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1299349

Title:
  upstream-translation-update Jenkins job failing

Status in Cinder:
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  Various upstream-translation-update jobs apparently succeed, but looking
  into the files, errors get ignored, and there are errors uploading the
  files to the translation site, such as:

  Error uploading file: There is a syntax error in your file.
  Line 1936: duplicate message definition...
  Line 7: ...this is the location of the first definition

  
  For keystone, see:
  
http://logs.openstack.org/78/7882359da114079e8411bd3f97c5628f2cd1c098/post/keystone-upstream-translation-update/27cbb22/

  This has been fixed on ironic with:
  https://review.openstack.org/#/c/83935
  See also: 
http://lists.openstack.org/pipermail/openstack-i18n/2014-March/000479.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1299349/+subscriptions



[Yahoo-eng-team] [Bug 1299333] [NEW] Unable to attach/detach interface for shelve-offloaded instance

2014-03-28 Thread Alex Xu
Public bug reported:

$ nova list
+--------------------------------------+------+--------+------------+-------------+-------------------+
| ID                                   | Name | Status | Task State | Power State | Networks          |
+--------------------------------------+------+--------+------------+-------------+-------------------+
| 543d9625-2523-4a7f-af47-50c8e6b44981 | vm1  | ACTIVE | -          | Running     | private=10.0.0.23 |
+--------------------------------------+------+--------+------------+-------------+-------------------+

$ nova shelve vm1
$ nova shelve-offload vm1
$ nova list
+--------------------------------------+------+-------------------+------------+-------------+-------------------+
| ID                                   | Name | Status            | Task State | Power State | Networks          |
+--------------------------------------+------+-------------------+------------+-------------+-------------------+
| 543d9625-2523-4a7f-af47-50c8e6b44981 | vm1  | SHELVED_OFFLOADED | -          | Shutdown    | private=10.0.0.23 |
+--------------------------------------+------+-------------------+------------+-------------+-------------------+


$ nova interface-detach vm1 bbabdfa5-7c47-4a67-b8ff-f36249fae8f1
ERROR (ClientException): The server has either erred or is incapable of 
performing the requested operation. (HTTP 500) (Request-ID: 
req-ec251c02-4adb-4c60-8e67-a998a1700ec4)


Got exception from nova-api:

2014-03-29 12:06:07.377 ERROR nova.api.openstack 
[req-ec251c02-4adb-4c60-8e67-a998a1700ec4 admin admin] Caught error: Unable to 
find host for Instance 543d9625-2523-4a7f-af47-50c8e6b44981
2014-03-29 12:06:07.377 TRACE nova.api.openstack Traceback (most recent call 
last):
2014-03-29 12:06:07.377 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/__init__.py", line 125, in __call__
2014-03-29 12:06:07.377 TRACE nova.api.openstack return 
req.get_response(self.application)
2014-03-29 12:06:07.377 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
2014-03-29 12:06:07.377 TRACE nova.api.openstack application, 
catch_exc_info=False)
2014-03-29 12:06:07.377 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application 
2014-03-29 12:06:07.377 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2014-03-29 12:06:07.377 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-03-29 12:06:07.377 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-03-29 12:06:07.377 TRACE nova.api.openstack   File 
"/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 582, in __call__
2014-03-29 12:06:07.377 TRACE nova.api.openstack return self.app(env, 
start_response)
2014-03-29 12:06:07.377 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-03-29 12:06:07.377 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-03-29 12:06:07.377 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-03-29 12:06:07.377 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-03-29 12:06:07.377 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
2014-03-29 12:06:07.377 TRACE nova.api.openstack response = 
self.app(environ, start_response)
2014-03-29 12:06:07.377 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-03-29 12:06:07.377 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-03-29 12:06:07.377 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
2014-03-29 12:06:07.377 TRACE nova.api.openstack resp = self.call_func(req, 
*args, **self.kwargs)
2014-03-29 12:06:07.377 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
2014-03-29 12:06:07.377 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
2014-03-29 12:06:07.377 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 917, in __call__
2014-03-29 12:06:07.377 TRACE nova.api.openstack content_type, body, accept)
2014-03-29 12:06:07.377 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 983, in _process_stack
2014-03-29 12:06:07.377 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
2014-03-29 12:06:07.377 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 1070, in dispatch
2014-03-29 12:06:07.377 TRACE nova.api.openstack return method(req=request, 
**action_args)
2014-03-29 12:06:07.377 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/o

[Yahoo-eng-team] [Bug 1299331] [NEW] Attach/detach interface has no effect for a paused instance

2014-03-28 Thread Alex Xu
Public bug reported:

$ nova boot --flavor 1 --image 76ae1239-0973-44cf-9051-0e1bc8f41cdd
--nic net-id=a15cfbed-86d8-4660-9593-46447cb9464e vm1

$ nova list
+--------------------------------------+------+--------+------------+-------------+-------------------+
| ID                                   | Name | Status | Task State | Power State | Networks          |
+--------------------------------------+------+--------+------------+-------------+-------------------+
| f7e2877d-c7f5-4493-89d4-c68e9839a7ff | vm1  | ACTIVE | -          | Running     | private=10.0.0.22 |
+--------------------------------------+------+--------+------------+-------------+-------------------+

$ brctl show
bridge name bridge id   STP enabled interfaces
br-eth0 .fe989d8bd148   no  
br-ex   .8a1d06d8854e   no  
br-ex2  .4a98bdebe544   no  
br-int  .229ad5053a41   no  
br-tun  .2e58a2f0e047   no  
docker0 8000.   no  
lxcbr0  8000.   no  
qbr0ad6a86e-d9  8000.9e5491dd719a   no  qvb0ad6a86e-d9
tap0ad6a86e-d9


$ neutron port-list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 0ad6a86e-d967-424e-9bf5-e6821cc0cd0d |      | fa:16:3e:3a:3e:5a | {"subnet_id": "94575a05-796f-4ff5-b892-3c3b8231b303", "ip_address": "10.0.0.22"}    |
| 1e6bed8d-aece-4d3e-abcc-3ad7957d6d72 |      | fa:16:3e:9e:dc:83 | {"subnet_id": "e5dbc790-c26f-45b7-b2c7-574f12ad8b41", "ip_address": "172.24.4.12"}  |
| 5f522a9a-2856-4a95-8bd8-c354c00abf0f |      | fa:16:3e:01:47:43 | {"subnet_id": "94575a05-796f-4ff5-b892-3c3b8231b303", "ip_address": "10.0.0.1"}     |
| 6226f6d3-3814-469c-bf50-8c99dfec481e |      | fa:16:3e:46:0e:35 | {"subnet_id": "94575a05-796f-4ff5-b892-3c3b8231b303", "ip_address": "10.0.0.2"}     |
| a3f2ab1c-a634-446d-8885-d7d8e5978fa1 |      | fa:16:3e:cf:02:d6 | {"subnet_id": "94575a05-796f-4ff5-b892-3c3b8231b303", "ip_address": "10.0.0.20"}    |
| c10390a9-6f84-44f5-8a17-91cb330a9e12 |      | fa:16:3e:41:7c:34 | {"subnet_id": "e5dbc790-c26f-45b7-b2c7-574f12ad8b41", "ip_address": "172.24.4.15"}  |
| c814425c-be1a-4c06-a54b-1788c7c6fb31 |      | fa:16:3e:f5:fc:d3 | {"subnet_id": "e5dbc790-c26f-45b7-b2c7-574f12ad8b41", "ip_address": "172.24.4.2"}   |
| ebd874b7-43e6-4d18-b0ed-f86bb349d8b9 |      | fa:16:3e:e6:b5:09 | {"subnet_id": "e5dbc790-c26f-45b7-b2c7-574f12ad8b41", "ip_address": "172.24.4.19"}  |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+


$ nova pause vm1

$ nova interface-detach vm1 0ad6a86e-d967-424e-9bf5-e6821cc0cd0d

$ nova list
+--------------------------------------+------+--------+------------+-------------+----------+
| ID                                   | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+----------+
| f7e2877d-c7f5-4493-89d4-c68e9839a7ff | vm1  | PAUSED | -          | Paused      |          |
+--------------------------------------+------+--------+------------+-------------+----------+

$ brctl show
bridge name bridge id   STP enabled interfaces
br-eth0 .fe989d8bd148   no  
br-ex   .8a1d06d8854e   no  
br-ex2  .4a98bdebe544   no  
br-int  .229ad5053a41   no  
br-tun  .2e58a2f0e047   no  
docker0 8000.   no  
lxcbr0  8000.   no  


But the tap device is still alive:

$ ifconfig|grep tap0ad6a86e-d9
tap0ad6a86e-d9 Link encap:Ethernet  HWaddr fe:16:3e:3a:3e:5a

And after logging into the instance and running 'ifconfig', you will find the
interface is still attached to the instance.
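
A quick way to confirm the guest itself was never touched is to ask libvirt which
interfaces are still defined on the (paused) domain. This is only an illustrative
check; the domain name below is a placeholder for the instance's libvirt name.

import libvirt
from xml.etree import ElementTree

def attached_tap_devices(domain_name, uri='qemu:///system'):
    # Parse the live domain XML and return the host-side tap device names.
    conn = libvirt.open(uri)
    try:
        dom = conn.lookupByName(domain_name)
        root = ElementTree.fromstring(dom.XMLDesc(0))
        return [iface.find('target').get('dev')
                for iface in root.findall('./devices/interface')
                if iface.find('target') is not None]
    finally:
        conn.close()

# If 'tap0ad6a86e-d9' still shows up here after nova interface-detach,
# the detach never reached the guest, matching the ifconfig output above.
print(attached_tap_devices('instance-00000001'))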

** Affects: nova
 Importance: Undecided
 Assignee: Alex Xu (xuhj)
 Status: New


** Tags: libvirt

** Changed in: nova
 Assignee: (unassigned) => Alex Xu (xuhj)

** Tags added: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1299331

Title:
  Attach/detach interface has no effect for a paused instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  $ nova boot --fl

[Yahoo-eng-team] [Bug 1299317] [NEW] Using 'interface-attach' without optional parameters results in an ERROR

2014-03-28 Thread shihanzhang
Public bug reported:


I used 'interface-attach' without any optional parameters to add a vNIC to a VM.
Nova returned a failure, but 'nova list' shows the vNIC info added to that VM. I
did it as follows:

root@ubuntu01:/var/log/nova# nova list
+--------------------------------------+------+--------+------------+-------------+----------------+
| ID                                   | Name | Status | Task State | Power State | Networks       |
+--------------------------------------+------+--------+------------+-------------+----------------+
| 663dc949-11f9-4aab-aaf7-6f5bd761ab6f | test | ACTIVE | None       | Running     | test=10.10.0.5 |
+--------------------------------------+------+--------+------------+-------------+----------------+
root@ubuntu01:/var/log/nova# nova interface-attach test
ERROR: Failed to attach interface (HTTP 500) (Request-ID: 
req-5af0e807-521f-45a2-a329-fd61ec74779e)
root@ubuntu01:/var/log/nova# nova list
+--------------------------------------+------+--------+------------+-------------+---------------------------------------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                                                |
+--------------------------------------+------+--------+------------+-------------+---------------------------------------------------------+
| 663dc949-11f9-4aab-aaf7-6f5bd761ab6f | test | ACTIVE | None       | Running     | test=10.10.0.5, 10.10.0.5, 10.10.0.12; test2=20.20.0.2 |
+--------------------------------------+------+--------+------------+-------------+---------------------------------------------------------+

The error log on the nova compute node is:
 ERROR nova.openstack.common.rpc.amqp [req-5af0e807-521f-45a2-a329-fd61ec74779e 
bcac7970f8ae41f38f79e01dece39bd8 d13fb5f6d2354320bf4767f9b71df820] Exception 
during message handling
 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 461, 
in _process_data
 TRACE nova.openstack.common.rpc.amqp **args)
 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
 TRACE nova.openstack.common.rpc.amqp result = getattr(proxyobj, 
method)(ctxt, **kwargs)
 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3892, in 
attach_interface
 TRACE nova.openstack.common.rpc.amqp raise 
exception.InterfaceAttachFailed(instance=instance)
 TRACE nova.openstack.common.rpc.amqp InterfaceAttachFailed: Failed to attach 
network adapter device to {u'vm_state': u'active', u'availability_zone': 
u'nova', u'terminated_at': None, u'ephemeral_gb': 0, u'instance_type_id': 3, 
u'user_data': None, u'cleaned': False, u'vm_mode': None, u'deleted_at': None, 
u'reservation_id': u'r-0542q330', u'id': 1, u'security_groups': [], 
u'disable_terminate': False, u'display_name': u'test', u'uuid': 
u'663dc949-11f9-4aab-aaf7-6f5bd761ab6f', u'default_swap_device': None, 
u'info_cache': {u'instance_uuid': u'663dc949-11f9-4aab-aaf7-6f5bd761ab6f', 
u'network_info': [{u'ovs_interfaceid': u'3c959010-25c5-4fe9-91c3-fdcfff57b870', 
u'network': {u'bridge': u'br-int', u'subnets': [{u'ips': [{u'floating_ips': [], 
u'meta': {}, u'type': u'fixed', u'version': 4, u'address': u'10.10.0.5'}], 
u'version': 4, u'meta': {u'dhcp_server': u'10.10.0.3'}, u'dns': [], u'routes': 
[], u'cidr': u'10.10.0.0/24', u'gateway': {u'meta': {}, u'type': u'gateway', 
u'version': 4, u'addres
 s': u'10.10.0.1'}}], u'meta': {u'injected': False, u'tenant_id': 
u'd13fb5f6d2354320bf4767f9b71df820'}, u'id': 
u'72d612c3-24e8-4a5c-8a45-7a5afb51c2f2', u'label': u'test'}, u'devname': 
u'tap3c959010-25', u'qbh_params': None, u'meta': {}, u'address': 
u'fa:16:3e:41:06:c6', u'type': u'ovs', u'id': 
u'3c959010-25c5-4fe9-91c3-fdcfff57b870', u'qbg_params': None}]}, u'hostname': 
u'test', u'launched_on': u'ubuntu01', u'display_description': u'test', 
u'key_data': None, u'kernel_id': u'', u'power_state': 1, 
u'default_ephemeral_device': None, u'progress': 0, u'project_id': 
u'd13fb5f6d2354320bf4767f9b71df820', u'launched_at': 
u'2014-03-27T21:03:05.00', u'scheduled_at': u'2014-03-27T21:02:59.00', 
u'node': u'ubuntu01', u'ramdisk_id': u'', u'access_ip_v6': None, 
u'access_ip_v4': None, u'deleted': False, u'key_name': None, u'updated_at': 
u'2014-03-27T21:03:05.00', u'host': u'ubuntu01', u'architecture': None, 
u'user_id': u'bcac7970f8ae41f38f79e01dece39bd8', u'system_metadata': 
{u'image_min_
 disk': u'1', u'instance_type_memory_mb': u'512', u'instance_type_swap': u'0', 
u'instance_type_vcpu_weight': None, u'instance_type_root_gb': u'1', 
u'instance_type_id': u'3', u'instance_type_name': u'm1.tiny', 
u'instance_type_ephemeral_gb': u'0', u'instance_type_rxtx_factor': u'1', 
u'instance_type_flavorid': u'1', u'image_container_format': u'bare', 
u'instan
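
What the output above suggests is that neutron allocated a port before the
hypervisor attach failed, and nothing removed it afterwards, which is why the
extra address lingers in 'nova list'. A rough sketch of the rollback that would
avoid the leftover port (purely illustrative; the helper names follow the
neutronv2 API calls seen in the trace, but the wiring is not the actual fix):

from nova import exception

def attach_interface(driver, network_api, context, instance,
                     network_id=None, port_id=None, requested_ip=None):
    # Ask neutron for a port first (this is the step that already succeeds).
    network_info = network_api.allocate_port_for_instance(
        context, instance, port_id, network_id, requested_ip)
    vif = network_info[0]
    try:
        # The hypervisor attach is what fails in the trace above;
        # image_meta is elided (None) in this sketch.
        driver.attach_interface(instance, None, vif)
    except Exception:
        # Roll the port back so it does not linger on the instance.
        network_api.deallocate_port_for_instance(context, instance, vif['id'])
        raise exception.InterfaceAttachFailed(instance=instance)
    return vif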

[Yahoo-eng-team] [Bug 1291676] Re: Able to create multiple volumes and volume snapshots with the same name

2014-03-28 Thread Santiago Baldassin
So even if we added a validation in Horizon, people could still use
the API to create the volumes/snapshots. Adding the validation in
Horizon might result in confusing situations, since users might end up
having duplicate names in the dashboard and yet be unable to create
resources with duplicate names.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291676

Title:
  Able to create multiple volumes and volume snapshots with the same
  name

Status in Cinder:
  Won't Fix
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Under Project -> Volumes, create a new volume, say 'vol1'.  Now create
  one or more snapshots from this volume and name them the same. Then
  create another new volume using the same name 'vol1' and create
  snapshots with the same name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1291676/+subscriptions



[Yahoo-eng-team] [Bug 1281148] Re: QPID reconnection delay can't be configured

2014-03-28 Thread Adam Gandelman
** Also affects: keystone
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => In Progress

** Changed in: keystone
Milestone: None => 2013.2.3

** Also affects: keystone/havana
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: In Progress => Invalid

** Changed in: keystone/havana
   Importance: Undecided => High

** Changed in: keystone/havana
   Status: New => In Progress

** Changed in: keystone/havana
 Assignee: (unassigned) => Flavio Percoco (flaper87)

** Changed in: cinder/havana
   Status: Fix Committed => In Progress

** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Also affects: ceilometer/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1281148

Title:
  QPID reconnection delay can't be configured

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Ceilometer havana series:
  New
Status in Cinder:
  Invalid
Status in Cinder havana series:
  In Progress
Status in OpenStack Identity (Keystone):
  Invalid
Status in Keystone havana series:
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in neutron havana series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) havana series:
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed
Status in oslo havana series:
  Fix Committed
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  Currently, qpid's reconnection delay can get up to 60s and it's not
  configurable. This is unfortunate because 60s is quite a lot of time
  to wait for HA systems, which makes this issue a blocker for these
  kinds of deployments.
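
  A minimal sketch of the behaviour being asked for, with the ceiling taken
  from configuration rather than hard-coded at 60s (the option name below is
  illustrative, not necessarily the one oslo.messaging eventually added):

  import time

  def reconnect_with_backoff(connect, max_reconnect_delay=5, first_delay=1):
      # Exponential backoff, capped by a configurable ceiling.
      delay = first_delay
      while True:
          try:
              return connect()
          except IOError:
              time.sleep(delay)
              delay = min(delay * 2, max_reconnect_delay)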

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1281148/+subscriptions



[Yahoo-eng-team] [Bug 1235435] Re: 'SubnetInUse: Unable to complete operation on subnet UUID. One or more ports have an IP allocation from this subnet.'

2014-03-28 Thread Adam Gandelman
** Changed in: nova/havana
   Status: Fix Released => Fix Committed

** Changed in: nova/havana
Milestone: 2013.2.1 => 2013.2.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1235435

Title:
  'SubnetInUse: Unable to complete operation on subnet UUID. One or more
  ports have an IP allocation from this subnet.'

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Committed
Status in Tempest:
  Invalid

Bug description:
  Occasional tempest failure:

  http://logs.openstack.org/86/49086/2/gate/gate-tempest-devstack-vm-neutron-isolated/ce14ceb/testr_results.html.gz

  ft3.1: tearDownClass 
(tempest.scenario.test_network_basic_ops.TestNetworkBasicOps)_StringException: 
Traceback (most recent call last):
File "tempest/scenario/manager.py", line 239, in tearDownClass
  thing.delete()
File "tempest/api/network/common.py", line 71, in delete
  self.client.delete_subnet(self.id)
File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 112, in with_params
  ret = self.function(instance, *args, **kwargs)
File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 380, in delete_subnet
  return self.delete(self.subnet_path % (subnet))
File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 1233, in delete
  headers=headers, params=params)
File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 1222, in retry_request
  headers=headers, params=params)
File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 1165, in do_request
  self._handle_fault_response(status_code, replybody)
File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 1135, in _handle_fault_response
  exception_handler_v20(status_code, des_error_body)
File "/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py", 
line 97, in exception_handler_v20
  message=msg)
  NeutronClientException: 409-{u'NeutronError': {u'message': u'Unable to 
complete operation on subnet 9e820b02-bfe2-47e3-b186-21c5644bc9cf. One or more 
ports have an IP allocation from this subnet.', u'type': u'SubnetInUse', 
u'detail': u''}}

  
  logstash query:

  @message:"One or more ports have an IP allocation from this subnet"
  AND @fields.filename:"logs/screen-q-svc.txt" and @message:"
  SubnetInUse: Unable to complete operation on subnet"


  
http://logstash.openstack.org/#eyJzZWFyY2giOiJAbWVzc2FnZTpcIk9uZSBvciBtb3JlIHBvcnRzIGhhdmUgYW4gSVAgYWxsb2NhdGlvbiBmcm9tIHRoaXMgc3VibmV0XCIgQU5EIEBmaWVsZHMuZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1xLXN2Yy50eHRcIiBhbmQgQG1lc3NhZ2U6XCIgU3VibmV0SW5Vc2U6IFVuYWJsZSB0byBjb21wbGV0ZSBvcGVyYXRpb24gb24gc3VibmV0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODA5MTY1NDUxODcsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=
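
  Since the teardown race is between the subnet delete and ports that are
  still being released, one tolerant pattern on the test side is to retry the
  delete while the 409 persists. A minimal sketch using the client from the
  traceback above (the retry wrapper itself is illustrative):

  import time

  from neutronclient.common import exceptions as neutron_exc

  def delete_subnet_with_retry(client, subnet_id, attempts=10, interval=2):
      for _ in range(attempts):
          try:
              client.delete_subnet(subnet_id)
              return
          except neutron_exc.NeutronClientException as exc:
              # Only retry the 409 SubnetInUse conflict; anything else is real.
              if getattr(exc, 'status_code', None) != 409:
                  raise
              time.sleep(interval)
      # Final attempt: let the conflict propagate if ports never went away.
      client.delete_subnet(subnet_id)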

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1235435/+subscriptions



[Yahoo-eng-team] [Bug 1299247] [NEW] GET details REST API next link missing 'details'

2014-03-28 Thread Steven Kaufer
Public bug reported:

When executing a pagination query, a "next" link is included in the API
reply when there are more items than the specified limit.

See the pagination documentation for more information:
http://docs.openstack.org/api/openstack-compute/2/content/Paginated_Collections-d1e664.html

The caller should be able to invoke the "next" link (without having to
re-format it) in order to get the next page of data.  The documentation
states "Subsequent links will honor the initial page size. Thus, a
client may follow links to traverse a paginated collection without
having to input the marker parameter."

The problem is that the "next" link is always scoped to the non-detailed
query.  For example, if you execute
"/v2//servers/detail?limit=1", the "next" link does not have the
URL for a detailed query and is formatted as
"/v2//servers?limit=1&marker=".  In this case the "next"
link needs to be scoped to "/v2//servers/detail".

This bug is caused because the "next" link is always generated by the
_collection_name value in the ViewBuilder -- this name is always
"servers".

** Affects: cinder
 Importance: Undecided
 Assignee: Steven Kaufer (kaufer)
 Status: New

** Affects: nova
 Importance: Undecided
 Assignee: Steven Kaufer (kaufer)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Steven Kaufer (kaufer)

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Steven Kaufer (kaufer)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1299247

Title:
  GET details REST API next link missing 'details'

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  When executing a pagination query, a "next" link is included in the API
  reply when there are more items than the specified limit.

  See the pagination documentation for more information:
  http://docs.openstack.org/api/openstack-compute/2/content/Paginated_Collections-d1e664.html

  The caller should be able to invoke the "next" link (without having to
  re-format it) in order to get the next page of data.  The
  documentation states "Subsequent links will honor the initial page
  size. Thus, a client may follow links to traverse a paginated
  collection without having to input the marker parameter."

  The problem is that the "next" link is always scoped to the non-
  detailed query.  For example, if you execute
  "/v2//servers/detail?limit=1", the "next" link does not have
  the URL for a detailed query and is formatted as
  "/v2//servers?limit=1&marker=".  In this case the
  "next" link needs to be scoped to "/v2//servers/detail".

  This bug is caused because the "next" link is always generated by the
  _collection_name value in the ViewBuilder -- this name is always
  "servers".

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1299247/+subscriptions



[Yahoo-eng-team] [Bug 1267636] Re: Horizon will not authenticate against keystone v3

2014-03-28 Thread David Lyle
** Changed in: django-openstack-auth
   Status: Fix Committed => Fix Released

** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1267636

Title:
  Horizon will not authenticate against keystone v3

Status in Django OpenStack Auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:

  Using devstack.
  In /opt/stack/horizon/horizon/openstack_dashboard/local/local_settings.py

  Setting the following

  OPENSTACK_API_VERSIONS = {
  "identity": 3
  }

  results in an authentication failure in keystone.

  A keystone v3 endpoint is available.
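
  The 404 in the failure log below comes from the v3 path being appended to a
  v2.0-scoped endpoint ("POST /v2.0/auth/tokens"). A tiny sketch of the kind of
  endpoint rewriting that avoids this (illustrative only, not the actual
  django_openstack_auth fix):

  def scope_auth_url(auth_url, identity_version):
      # Strip any version suffix from the configured endpoint, then re-apply
      # the one matching OPENSTACK_API_VERSIONS['identity'].
      base = auth_url.rstrip('/')
      for suffix in ('/v2.0', '/v3'):
          if base.endswith(suffix):
              base = base[:-len(suffix)]
              break
      return '%s/%s' % (base, 'v3' if int(identity_version) >= 3 else 'v2.0')

  # scope_auth_url('http://127.0.0.1:5000/v2.0', 3) -> 'http://127.0.0.1:5000/v3'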

  What follows are the keystone logs for the failure case:

  (eventlet.wsgi.server): 2014-01-09 14:07:04,229 INFO log write (28305)
  accepted ('127.0.0.1', 4)

  (routes.middleware): 2014-01-09 14:07:04,231 DEBUG middleware __call__ 
Matched POST /auth/tokens
  (routes.middleware): 2014-01-09 14:07:04,231 DEBUG middleware __call__ Route 
path: '{path_info:.*}', defaults: {'controller': 
}
  (routes.middleware): 2014-01-09 14:07:04,232 DEBUG middleware __call__ Match 
dict: {'controller': , 'path_info': '/auth/tokens'}
  (routes.middleware): 2014-01-09 14:07:04,232 DEBUG middleware __call__ 
Matched POST /auth/tokens
  (routes.middleware): 2014-01-09 14:07:04,232 DEBUG middleware __call__ Route 
path: '{path_info:.*}', defaults: {'controller': 
}
  (routes.middleware): 2014-01-09 14:07:04,232 DEBUG middleware __call__ Match 
dict: {'controller': , 'path_info': '/auth/tokens'}
  (routes.middleware): 2014-01-09 14:07:04,232 DEBUG middleware __call__ No 
route matched for POST /auth/tokens
  (access): 2014-01-09 14:07:04,233 INFO core __call__ 127.0.0.1 - - 
[09/Jan/2014:22:07:04 +] "POST http://127.0.0.1:5000/v2.0/auth/tokens 
HTTP/1.0" 404 93
  (eventlet.wsgi.server): 2014-01-09 14:07:04,233 INFO log write 127.0.0.1 - - 
[09/Jan/2014 14:07:04] "POST /v2.0/auth/tokens HTTP/1.1" 404 228 0.002791

  
  When using the default (v2.0) keystone (having the above code commented out), 
authentication succeeds:

  What follows are the corresponding partial  keystone logs for the
  success case:

  (eventlet.wsgi.server): 2014-01-09 14:08:41,806 INFO log write (28305)
  accepted ('127.0.0.1', 41112)

  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ 
Matched POST /tokens
  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ Route 
path: '{path_info:.*}', defaults: {'controller': 
}
  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ Match 
dict: {'controller': , 'path_info': '/tokens'}
  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ 
Matched POST /tokens
  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ Route 
path: '{path_info:.*}', defaults: {'controller': 
}
  (routes.middleware): 2014-01-09 14:08:41,808 DEBUG middleware __call__ Match 
dict: {'controller': , 'path_info': '/tokens'}
  (routes.middleware): 2014-01-09 14:08:41,808 DEBUG middleware __call__ 
Matched POST /tokens
  (routes.middleware): 2014-01-09 14:08:41,808 DEBUG middleware __call__ Route 
path: '/tokens', defaults: {'action': u'authenticate', 'controller': 
}
  (routes.middleware): 2014-01-09 14:08:41,808 DEBUG middleware __call__ Match 
dict: {'action': u'authenticate', 'controller': 
}
  (keystone.common.wsgi): 2014-01-09 14:08:41,808 DEBUG wsgi __call__ arg_dict: 
{}
  (keystone.openstack.common.versionutils): 2014-01-09 14:08:41,809 WARNING log 
deprecated Deprecated: v2 API is deprecated as of Icehouse in favor of v3 API 
and may be removed in K.
  (dogpile.core.dogpile): 2014-01-09 14:08:41,809 DEBUG dogpile _enter 
NeedRegenerationException

  Using (eventlet.wsgi.server): 2014-01-09 14:08:41,806 INFO log write
  (28305) accepted ('127.0.0.1', 41112)

  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ 
Matched POST /tokens
  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ Route 
path: '{path_info:.*}', defaults: {'controller': 
}
  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ Match 
dict: {'controller': , 'path_info': '/tokens'}
  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ 
Matched POST /tokens
  (routes.middleware): 2014-01-09 14:08:41,807 DEBUG middleware __call__ Route 
path: '{path_info:.*}', defaults: {'controller': 
}
  (routes.middleware): 2014-01-09 14:08:41,808 DEBUG middleware __call__ Match 
dict: {'controller': , 'path_info': '/tokens'}
  (routes.middleware): 2014-01-09 14:08:41,808 DEBUG middleware __call__ 
Matched POST /tokens
  (routes.middleware): 2014-01-09 14:08:41,808 DEBUG middleware __call__ Route 
path: '/tokens', defaults: {'action': u'authenticate', 'controller': 
}
  (routes.middleware): 2014-01-09 14:08:41,808 DEBUG middleware __

[Yahoo-eng-team] [Bug 1195139] Re: vmware: error trying to store hypervisor_version as string when using postgresql

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1195139

Title:
  vmware: error trying to store hypervisor_version as string when using
  postgresql

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New
Status in The OpenStack VMwareAPI subTeam:
  Fix Released

Bug description:
  Hi,

  I am trying to use VMware hypervisor ESXi 5.0.0 as a compute resource
  in OpenStack 2013.1.1. The driver being used is
  "vmwareapi.VMwareVCDriver". The vCenter contains a single cluster
  (two nodes) running ESXi version 5.0.

  In the compute node, I am seeing the below error

  2013-06-26 17:45:27.532 10253 AUDIT nova.compute.resource_tracker [-] Free 
ram (MB): 146933
  2013-06-26 17:45:27.532 10253 AUDIT nova.compute.resource_tracker [-] Free 
disk (GB): 55808
  2013-06-26 17:45:27.533 10253 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 24
  2013-06-26 17:45:27.533 10253 DEBUG nova.openstack.common.rpc.amqp [-] Making 
synchronous call on conductor ... multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:583
  2013-06-26 17:45:27.534 10253 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID 
is ece63342e4254910be13ee92b948ace6 multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:586
  2013-06-26 17:45:27.534 10253 DEBUG nova.openstack.common.rpc.amqp [-] 
UNIQUE_ID is c85fb461be22468b846368a6b8608ac2. _add_unique_id 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:337
  2013-06-26 17:45:27.566 10253 DEBUG nova.openstack.common.rpc.amqp [-] Making 
synchronous call on conductor ... multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:583
  2013-06-26 17:45:27.567 10253 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID 
is 3f5d065d1c8a4989b4d36415d45abe8b multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:586
  2013-06-26 17:45:27.567 10253 DEBUG nova.openstack.common.rpc.amqp [-] 
UNIQUE_ID is 1421bebfd9744271be63dcef74551c93. _add_unique_id 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:337
  2013-06-26 17:45:27.597 10253 CRITICAL nova [-] Remote error: DBError 
(DataError) invalid input syntax for integer: "5.0.0"
  LINE 1: ..., 7, 24, 147445, 55808, 0, 512, 0, 'VMware ESXi', '5.0.0', '...
   ^
   'INSERT INTO compute_nodes (created_at, updated_at, deleted_at, deleted, 
service_id, vcpus, memory_mb, local_gb, vcpus_used, memory_mb_used, 
local_gb_used, hypervisor_type, hypervisor_version, hypervisor_hostname, 
free_ram_mb, free_disk_gb, current_workload, running_vms, cpu_info, 
disk_available_least) VALUES (%(created_at)s, %(updated_at)s, %(deleted_at)s, 
%(deleted)s, %(service_id)s, %(vcpus)s, %(memory_mb)s, %(local_gb)s, 
%(vcpus_used)s, %(memory_mb_used)s, %(local_gb_used)s, %(hypervisor_type)s, 
%(hypervisor_version)s, %(hypervisor_hostname)s, %(free_ram_mb)s, 
%(free_disk_gb)s, %(current_workload)s, %(running_vms)s, %(cpu_info)s, 
%(disk_available_least)s) RETURNING compute_nodes.id' {'local_gb': 55808, 
'vcpus_used': 0, 'deleted': 0, 'hypervisor_type': u'VMware ESXi', 'created_at': 
datetime.datetime(2013, 6, 26, 12, 15, 34, 804731), 'local_gb_used': 0, 
'updated_at': None, 'hypervisor_hostname': u'10.100.10.42', 'memory_mb': 
147445, 'current_workload': 0, 'vcpus': 24, 'free_ram_mb': 146933, 
'running_vms': 0, 'free_disk_gb': 55808, 'service_id': 7, 'hypervisor_version': 
u'5.0.0', 'disk_available_least': None, 'deleted_at': None, 'cpu_info': 
u'{"model": "Intel(R) Xeon(R) CPU   L5640  @ 2.27GHz", "vendor": "HP", 
"topology": {"cores": 12, "threads": 24, "sockets": 2}}', 'memory_mb_used': 512}
  [u'Traceback (most recent call last):\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 430, 
in _process_data\nrval = self.proxy.dispatch(ctxt, version, method, 
**args)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", 
line 133, in dispatch\nreturn getattr(proxyobj, method)(ctxt, **kwargs)\n', 
u'  File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 
350, in compute_node_create\nresult = self.db.compute_node_create(context, 
values)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/db/api.py", line 
192, in compute_node_create\nreturn IMPL.compute_node_create(context, 
values)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 96, in 
wrapper\nreturn f(*args, **kwargs)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 498, in 
compute_node_create\ncompute_node_ref.save()\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/db/sqlalchemy/models.py",
 line 54, i
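
  The DataError above boils down to the VMware driver reporting
  hypervisor_version as the string "5.0.0" while the compute_nodes column is an
  integer, which PostgreSQL rejects outright. Nova carries a similar helper
  (convert_version_to_int) for this; the standalone version below is only an
  illustrative sketch of the conversion needed:

  def version_str_to_int(version):
      # "5.0.0" -> 5000000, "5.1" -> 5001000; fits the integer DB column.
      parts = [int(part) for part in version.split('.')]
      parts += [0] * (3 - len(parts))
      major, minor, patch = parts[:3]
      return major * 1000000 + minor * 1000 + patch

  print(version_str_to_int('5.0.0'))   # 5000000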

[Yahoo-eng-team] [Bug 1228847] Re: VMware: VimException: Exception in __deepcopy__ Method not found

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1228847

Title:
  VMware: VimException: Exception in __deepcopy__ Method not found

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  When an exception occurs in the VMWare driver, for example when there
  are no more IP addresses available, then the following exception is
  returned:

  2013-09-22 05:26:22.522 ERROR nova.compute.manager 
[req-b29710eb-5cb9-4de1-adca-919119b10460 demo demo] [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] Error: Exception in __deepcopy__ Method 
not found: 'VimService.VimPort.__deepcopy__'
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] Traceback (most recent call last):
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1038, in _build_instance
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] set_access_ip=set_access_ip)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File 
"/opt/stack/nova/nova/compute/claims.py", line 53, in __exit__
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] self.abort()
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File 
"/opt/stack/nova/nova/compute/claims.py", line 107, in abort
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] LOG.debug(_("Aborting claim: %s") % 
self, instance=self.instance)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File 
"/opt/stack/nova/nova/openstack/common/gettextutils.py", line 228, in __mod__
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] return copied._save_parameters(other)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File 
"/opt/stack/nova/nova/openstack/common/gettextutils.py", line 186, in 
_save_parameters
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] self.params = copy.deepcopy(other)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File "/usr/lib/python2.7/copy.py", line 
190, in deepcopy
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] y = _reconstruct(x, rv, 1, memo)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File "/usr/lib/python2.7/copy.py", line 
334, in _reconstruct
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] state = deepcopy(state, memo)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File "/usr/lib/python2.7/copy.py", line 
163, in deepcopy
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] y = copier(x, memo)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File "/usr/lib/python2.7/copy.py", line 
257, in _deepcopy_dict
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] y[deepcopy(key, memo)] = 
deepcopy(value, memo)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File "/usr/lib/python2.7/copy.py", line 
190, in deepcopy
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] y = _reconstruct(x, rv, 1, memo)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File "/usr/lib/python2.7/copy.py", line 
334, in _reconstruct
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] state = deepcopy(state, memo)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File "/usr/lib/python2.7/copy.py", line 
163, in deepcopy
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c] y = copier(x, memo)
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8a66-38d6494b9b4c]   File "/usr/lib/python2.7/copy.py", line 
257, in _deepcopy_dict
  2013-09-22 05:26:22.522 TRACE nova.compute.manager [instance: 
7f425853-63a9-4d84-8
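
  The loop in the trace comes from gettextutils deep-copying every message
  parameter, and the suds service proxy answers the __deepcopy__ lookup by
  raising "Method not found". A defensive fallback of this shape (illustrative,
  not the merged oslo change) keeps a reference when a value refuses to be
  deep-copied:

  import copy

  def safe_copy_param(value):
      try:
          return copy.deepcopy(value)
      except Exception:
          # e.g. suds proxies fail any unknown attribute, including __deepcopy__
          return value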

[Yahoo-eng-team] [Bug 1241275] Re: Nova / Neutron Client failing upon re-authentication after token expiration

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1241275

Title:
  Nova / Neutron Client failing upon re-authentication after token
  expiration

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New
Status in Python client library for Neutron:
  Fix Committed

Bug description:
  By default, the token lifetime for clients is 24 hours.  When that token
  expires (or is invalidated for any reason), nova should obtain a new
  token.

  Currently, when the token expires, it leads to the following fault:
  File "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", 
line 136, in _get_available_networks
nets = neutron.list_networks(**search_opts).get('networks', [])
  File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", 
line 108, in with_params
ret = self.function(instance, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", 
line 325, in list_networks
**_params)
  File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", 
line 1197, in list
for r in self._pagination(collection, path, **params):
  File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", 
line 1210, in _pagination
res = self.get(path, params=params)
  File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", 
line 1183, in get
headers=headers, params=params)
  File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", 
line 1168, in retry_request
headers=headers, params=params)
  File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", 
line 1103, in do_request
resp, replybody = self.httpclient.do_request(action, method, body=body)
  File "/usr/lib/python2.6/site-packages/neutronclient/client.py", line 
188, in do_request
self.authenticate()
  File "/usr/lib/python2.6/site-packages/neutronclient/client.py", line 
224, in authenticate
token_url = self.auth_url + "/tokens"
  TRACE nova.openstack.common.rpc.amqp TypeError: unsupported operand 
type(s) for +: 'NoneType' and 'str'

  This error is occurring because nova/network/neutronv2/__init__.py
  obtains a token for communication with neutron.  Nova is then
  authenticating the token (nova/network/neutronv2/__init__.py -
  _get_auth_token).  Upon authentication, it passes in the token into
  the neutron client (via the _get_client method).  It should be noted
  that the token is the main element passed into the neutron client
  (auth_url, username, password, etc... are not passed in as part of the
  request)

  Since nova is passing the token directly into the neutron client, nova
  does not validate whether or not the token is authenticated.

  After the 24 hour period of time, the token naturally expires.
  Therefore, when the neutron client goes to make a request, it catches
  an exceptions.Unauthorized block.  Upon catching this exception, the
  neutron client attempts to re-authenticate and then make the request
  again.

  The issue arises in the re-authentication of the token.  The neutron client's 
authenticate method requires that the following parameters are sent in from its 
users:
   - username
   - password
   - tenant_id or tenant_name
   - auth_url
   - auth_strategy

  Since the nova client is not passing these parameters in, the neutron
  client is failing with the exception above.
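
  In other words, the non-admin client is constructed with only a token, so
  there is nothing left to re-authenticate with once that token expires. A
  sketch of the difference (keyword names follow the neutronclient constructor
  of that era; the wiring itself is illustrative, not the committed fix):

  from neutronclient.v2_0 import client as neutron_client

  def get_client(conf, token):
      return neutron_client.Client(
          token=token,
          endpoint_url=conf.neutron_url,
          # Without the credentials below, client.authenticate() has only
          # auth_url=None to work with, hence the TypeError in the traceback.
          username=conf.neutron_admin_username,
          password=conf.neutron_admin_password,
          tenant_name=conf.neutron_admin_tenant_name,
          auth_url=conf.neutron_admin_auth_url,
          auth_strategy=conf.neutron_auth_strategy)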

  Not all methods from the nova client are exposed to this.  Invocations
  to nova/network/neutronv2/__init__.py - get_client with an 'admin'
  value set to True will always get a new token.  However, the clients
  that invoke the get_client method without specifying the admin flag,
  or by explicitly setting it to False will be affected by this.  Note
  that the admin flag IS NOT determined based off the context's admin
  attribute.

  Methods from nova/network/neutronv2/api.py that are currently affected appear 
to be:
   - _get_available_networks
   - allocate_for_instance
   - deallocate_for_instance
   - deallocate_port_for_instance
   - list_ports
   - show_port
   - add_fixed_ip_to_instance
   - remove_fixed_ip_from_instance
   - validate_networks
   - _get_instance_uuids_by_ip
   - associate_floating_ip
   - get_all
   - get
   - get_floating_ip
   - get_floating_ip_pools
   - get_floating_ip_by_address
   - get_floating_ips_by_project
   - get_instance_id_by_floating_address
   - allocate_floating_ip
   - release_floating_ip
   - disassociate_floating_ip
   - _get_subnets_from_port

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1241275/+subscriptions


[Yahoo-eng-team] [Bug 1244092] Re: db connection retrying doesn't work against db2

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244092

Title:
  db connection retrying doesn't work against db2

Status in Cinder:
  Confirmed
Status in Cinder havana series:
  Fix Committed
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  When I start OpenStack following the steps below, the OpenStack services
  can't be started without a DB2 connection:
  1. start the OpenStack services;
  2. start the DB2 service.

  I checked the code in session.py under
  nova/openstack/common/db/sqlalchemy. The root cause is that the DB2
  connection error code "-30081" isn't in conn_err_codes in the
  _is_db_connection_error function, so the connection-retrying code is
  skipped for DB2. To enable connection retrying for DB2, we need to add
  DB2 support to the _is_db_connection_error function.
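
  A sketch of what that change amounts to (the real function lives in the oslo
  db code vendored under nova/openstack/common; the DB2 entry is the addition
  being requested):

  def _is_db_connection_error(args):
      # Return True if the exception text looks like a lost DB connection.
      conn_err_codes = (
          '2002', '2003', '2006',   # MySQL "can't connect" family
          '-30081',                 # DB2 SQL30081N communication failure
      )
      for err_code in conn_err_codes:
          if args.find(err_code) != -1:
              return True
      return False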

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1244092/+subscriptions



[Yahoo-eng-team] [Bug 1245746] Re: Grizzly to Havana Upgrade wipes out Nova quota_usages table

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1245746

Title:
  Grizzly to Havana Upgrade wipes out Nova quota_usages table

Status in OpenStack Compute (Nova):
  Incomplete
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  In Grizzly, there is no user_id column in the quota_usages table, and
  the quota_usages table in the database looks like this:

  mysql> select * from quota_usages;
  
  +---------------------+---------------------+------------+----+----------------------------------+-----------+--------+----------+---------------+---------+
  | created_at          | updated_at          | deleted_at | id | project_id                       | resource  | in_use | reserved | until_refresh | deleted |
  +---------------------+---------------------+------------+----+----------------------------------+-----------+--------+----------+---------------+---------+
  | 2013-10-29 03:03:05 | 2013-10-29 03:19:30 | NULL       |  1 | 9cb04bffbe784771bd28fa093d749804 | instances |      1 |        0 |          NULL |       0 |
  | 2013-10-29 03:03:05 | 2013-10-29 03:19:30 | NULL       |  2 | 9cb04bffbe784771bd28fa093d749804 | ram       |    512 |        0 |          NULL |       0 |
  | 2013-10-29 03:03:05 | 2013-10-29 03:19:30 | NULL       |  3 | 9cb04bffbe784771bd28fa093d749804 | cores     |      1 |        0 |          NULL |       0 |
  +---------------------+---------------------+------------+----+----------------------------------+-----------+--------+----------+---------------+---------+

  The problem can be recreated througth the following steps:

  1. In the upgrade from Grizzly to Havana, migration script
  203_make_user_quotas_key_and_value.py adds a 'user_id' column to the
  quota_usages table and its shadow table.

  2. Migration script 216_sync_quota_usages.py will delete any
  instances/cores/ram/etc. quota_usages rows without a user_id via
  delete_null_rows. Since this is a Grizzly to Havana upgrade, and there
  is no user_id column in Grizzly (user_id is added by
  203_make_user_quotas_key_and_value.py in Havana), all the
  instances/cores/ram/etc. resources in quota_usages will be deleted.

  3. Then the script 216_sync_quota_usages.py will try to add new
  quota_usages entries based on a query of the resources left in the table.
  Remember from step 2 that they are already deleted, so there will
  be no quota entry inserted or updated.

  The result is that the quota_usages entries from Grizzly are wiped out
  during the upgrade to Havana, as the sketch below illustrates.
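
  A condensed sketch of the ordering problem (illustrative, not the actual 216
  migration): on data upgraded straight from Grizzly every row still has a NULL
  user_id, so the delete step removes everything the sync step would later
  count.

  from sqlalchemy import MetaData, Table

  def upgrade(migrate_engine):
      meta = MetaData(bind=migrate_engine)
      quota_usages = Table('quota_usages', meta, autoload=True)

      # On a Grizzly database this matches every row, because user_id only
      # appeared in migration 203 and was never backfilled.
      migrate_engine.execute(
          quota_usages.delete().where(quota_usages.c.user_id == None))  # noqa

      # The re-population that follows finds no rows left to sync, so the
      # project quota usages are silently lost.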

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1245746/+subscriptions



[Yahoo-eng-team] [Bug 1247296] Re: VMware: launching instance from instance snapshot fails

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1247296

Title:
  VMware: launching instance from instance snapshot fails

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New
Status in The OpenStack VMwareAPI subTeam:
  Fix Committed

Bug description:
  branch: stable/havana

  When using the nova VMwareVCDriver, instances launched from instance
  snapshots are spawning with ERROR.

  Steps to Reproduce:
  (using Horizon)
  1. Launch an instance using a vmdk image
  2. Take a snapshot of the instance
  3. Launch an instance using the snapshot

  Expected result: Instance should boot up successfully
  Actual result: Instance boots up with ERROR and the following error is found
  in n-cpu.log:

   Traceback (most recent call last):
     File "/opt/stack/nova/nova/compute/manager.py", line 1407, in _spawn
   block_device_info)
     File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 623, in spawn
   admin_password, network_info, block_device_info)
     File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 449, in spawn
   uploaded_vmdk_path)
     File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 387, in
   self._session._wait_for_task(instance['uuid'], vmdk_copy_task)
     File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 900, in
   ret_val = done.wait()
     File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 116,
   return hubs.get_hub().switch()
     File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line
   return self.greenlet.switch()
   NovaException: A specified parameter was not correct.
   fileType

  The error seems to occur during the CopyVirtualDisk_Task operation.
  Full log for context is available here:
  http://paste.openstack.org/show/50411/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1247296/+subscriptions



[Yahoo-eng-team] [Bug 1243193] Re: VMware: snapshot backs up wrong disk when instance is attached to volume

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1243193

Title:
  VMware: snapshot backs up wrong disk when instance is attached to
  volume

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  When a volume is attached to an instance and we back up the instance or
  try to create an image from it, the volume's disk is backed up
  and not the instance's primary disk.

  More info:
  https://communities.vmware.com/community/vmtn/openstack/blog/2013/08/28
  /introducing-vova-an-easy-way-to-try-out-openstack-on-
  vsphere#comment-29775

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1243193/+subscriptions



[Yahoo-eng-team] [Bug 1247901] Re: Querying Windows via WMI intermittently fails in get_device_number_for_target

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1247901

Title:
  Querying Windows via WMI intermittently fails in
  get_device_number_for_target

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  The WMI query in basevolumeutils.get_device_number_for_target can
  incorrectly return no device_number during extended stress runs.  We
  saw that it would return incorrect data after about an hour.  It could
  also return an initiator_sessions object that was empty.

  By adding a check that the returned devices list is not empty, and a retry
  loop in volumeops._get_mounted_disk_from_lun, we could avoid hitting the
  case where it thought it couldn't get a mounted disk for the target_iqn.
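
  For reference, a self-contained sketch of the retry idea described above
  (hypothetical helper, not the code that was merged into the Hyper-V
  volumeops):

  import time

  def get_device_number_with_retry(query_fn, target_iqn, attempts=5, delay=2):
      """Retry a flaky WMI lookup until it returns a usable device number."""
      for _ in range(attempts):
          device_number = query_fn(target_iqn)
          if device_number is not None:
              return device_number
          time.sleep(delay)
      raise RuntimeError("No mounted disk found for target %s" % target_iqn)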

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1247901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246848] Re: VMWare: AssertionError: Trying to re-send() an already-triggered event.

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246848

Title:
  VMWare: AssertionError: Trying to re-send() an already-triggered
  event.

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  When an exception occurs in _wait_for_task, for example because a requested
  file does not exist, another exception is also thrown:

  2013-10-31 10:49:52.617 WARNING nova.virt.vmwareapi.driver [-] In 
vmwareapi:_poll_task, Got this error Trying to re-send() an already-triggered 
event.
  2013-10-31 10:49:52.618 ERROR nova.openstack.common.loopingcall [-] in fixed 
duration looping call
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall Traceback 
(most recent call last):
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall   File 
"/opt/stack/nova/nova/openstack/common/loopingcall.py", line 78, in _inner
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall 
self.f(*self.args, **self.kw)
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 941, in _poll_task
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall 
done.send_exception(excep)
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 208, in 
send_exception
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall return 
self.send(None, args)
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 150, in send
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall assert 
self._result is NOT_USED, 'Trying to re-send() an already-triggered event.'
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall 
AssertionError: Trying to re-send() an already-triggered event.
  2013-10-31 10:49:52.618 TRACE nova.openstack.common.loopingcall
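
  A minimal, self-contained reproduction of the eventlet behaviour behind this
  traceback (independent of nova; assumes eventlet is installed):

  from eventlet import event

  evt = event.Event()
  evt.send_exception(RuntimeError("first failure"))
  try:
      # A second send on the same Event is what _poll_task ends up doing above.
      evt.send_exception(RuntimeError("second failure"))
  except AssertionError as exc:
      print(exc)   # Trying to re-send() an already-triggered event.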

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246848/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1253455] Re: Compute node stats update may lead to DBDeadlock

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1253455

Title:
  Compute node stats update may lead to DBDeadlock

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  During a tempest run, when a compute node's usage stats are updated on
  the DB as part of resource claiming for an instance spawn, we hit a
  DBDeadlock exception:

  File ".../nova/compute/manager.py", line 1002, in _build_instance
   with rt.instance_claim(context, instance, limits):
  File ".../nova/openstack/common/lockutils.py", line 248, in inner
   return f(*args, **kwargs)
  File ".../nova/compute/resource_tracker.py", line 126, in instance_claim
   self._update(elevated, self.compute_node)
  File ".../nova/compute/resource_tracker.py", line 429, in _update
   context, self.compute_node, values, prune_stats)
  File ".../nova/conductor/api.py", line 240, in compute_node_update
   prune_stats)
  File ".../nova/conductor/rpcapi.py", line 363, in compute_node_update
   prune_stats=prune_stats)
  File ".../nova/rpcclient.py", line 85, in call
   return self._invoke(self.proxy.call, ctxt, method, **kwargs)
  File ".../nova/rpcclient.py", line 63, in _invoke
   return cast_or_call(ctxt, msg, **self.kwargs)
  File ".../nova/openstack/common/rpc/proxy.py", line 126, in call
   result = rpc.call(context, real_topic, msg, timeout)
  File ".../nova/openstack/common/rpc/__init__.py", line 139, in call
   return _get_impl().call(CONF, context, topic, msg, timeout)
  File ".../nova/openstack/common/rpc/impl_kombu.py", line 816, in call
   rpc_amqp.get_connection_pool(conf, Connection))
  File ".../nova/openstack/common/rpc/amqp.py", line 574, in call
   rv = list(rv)
  File ".../nova/openstack/common/rpc/amqp.py", line 539, in __iter__
   raise result
    RemoteError: Remote error: DBDeadlock (OperationalError) (1213, 'Deadlock 
found when trying to get lock; try restarting transaction') 'UPDATE 
compute_nodes SET updated_at=%s, hypervisor_version=%s WHERE compute_nodes.id = 
%s' (datetime.datetime(2013, 11, 20, 18, 28, 19, 525920), u'5.1.0 1)

  (A more complete log is at http://paste.openstack.org/raw/53702/)

  Can someone characterize the conditions under which this type of error can
  occur?

  Perhaps sqlalchemy.api.compute_node_update() needs the
  @_retry_on_deadlock treatment?
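
  For illustration, a self-contained sketch of such a retry decorator (the
  exception type and backoff are assumptions; nova's actual _retry_on_deadlock
  differs in detail):

  import functools
  import time

  class DBDeadlock(Exception):
      """Stand-in for the deadlock error raised by the database layer."""

  def retry_on_deadlock(func, retries=5, delay=0.5):
      @functools.wraps(func)
      def wrapper(*args, **kwargs):
          for attempt in range(retries):
              try:
                  return func(*args, **kwargs)
              except DBDeadlock:
                  if attempt == retries - 1:
                      raise
                  # MySQL asks us to "try restarting transaction", so back off
                  # briefly and run the decorated DB call again.
                  time.sleep(delay)
      return wrapper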

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1253455/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1250763] Re: Users with admin role in Nova should not re-authenticate with Neutron

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1250763

Title:
  Users with admin role in Nova should not re-authenticate with Neutron

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  A recent change to the way Nova creates a Neutron client 
https://review.openstack.org/#/c/52954/4
  changed the conditions under which it re-authenticates using the neutron 
admin credentials from
  “if admin” to “if admin or context.is_admin”.

  This means that any user with admin role in Nova now interacts with Neutron 
as a different tenant.
  Not only does this cause an unnecessary re-authentication (the user
  may/should also have an admin role in Neutron), it means that they can no
  longer allocate and assign a floating IP to their instance via Nova (as the
  floating IP will now always be allocated in the context of
  neutron_admin_tenant).

  The context_is_admin part of this change should be reverted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1250763/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252849] Re: tempest fail due to neutron cache miss

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1252849

Title:
  tempest fail due to neutron cache miss

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  Sounds like a regression caused by the following commits:
    1. 
https://github.com/openstack/nova/commit/1957339df302e2da75e0dbe78b5d566194ab2c08
    2. 
https://github.com/openstack/nova/commit/651fac3d5d250d42e640c3ac113084bf0d2fa3b4

  The above patches cause 2 issues:
    1. tempest.api.compute.servers.test_server_actions test fail leaving the 
servers in ERROR state.
    2. Unable to delete those servers using nova delete.

  Tempest and compute traceback here:
http://pastebin.com/CVjG03eV

  The patch was to disable cache refresh in allocate_for_instance,
  allocate_port_for_instance and deallocate_port_for_instance methods.
  This is breaking the nova boot process when the servers are created
  using the above tempest test. However, we could create servers using
  nova boot api, manually. It is likely because the cache is disabled
  while allocating the instance.

  The servers created using the above test are left in ERROR state and we are
  unable to delete them. This is likely because the cache is disabled while
  deallocating the instance and/or port.

  NOTE:
  If we restore the @refresh_sync decorator on those methods and do not use the
  decorator in _get_instance_nw_info, the above tempest test is successful. I
  have not tested it when deleting the VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1252849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1250580] Re: Excessive calls to keystone from neutron glue code

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1250580

Title:
  Excessive calls to keystone from neutron glue code

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  This has been noticed before, it just came up again with comments in
  https://bugs.launchpad.net/neutron/+bug/1250168

  [12:23]  salv-orlando, for 1250168, where are the keystone calls being 
made from? (you indicated a large percent of api calls are keystone in the last 
comment)
  [12:35]  It looks like all calls in admin context trigger a POST 
to keystone as the admon token is not cachex
  [12:35]  Cached
  [12:36]  I think this was intentional even if I know do not 
recall the precise reason
  [12:40]  salv-mobile, which file? in python-neutronclient?
  [12:41]  I am afk i think it's either nova.network.neutronv2.api 
or nova.network.neutronv2
  [12:41]  salv-mobile, thx, will take a peek
  [12:42]  The latter would be __init_.py of course

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1250580/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255577] Re: Requests to Metadata API fail with 500 if Neutron network plugin is used

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255577

Title:
  Requests to Metadata API fail with 500 if Neutron network plugin  is
  used

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  In the TripleO devtest story we are using Nova + Baremetal Driver +
  Neutron. The provisioned baremetal instance obtains its configuration
  from the Metadata API. Currently all requests to the Metadata API fail
  with a 500 error.

  In nova-api log I can see the following traceback:

  2013-11-27 11:44:01,423.423 5895 ERROR nova.api.metadata.handler 
[req-0d22f3c7-663e-452e-bfa9-747b728fc13b None None] Failed to get metadata for 
ip: 192.0.2.2
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler Traceback 
(most recent call last):
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/api/metadata/handler.py",
 line 136, in _handle_remote_ip_request
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler 
meta_data = self.get_metadata_by_remote_address(remote_address)
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/api/metadata/handler.py",
 line 78, in get_metadata_by_remote_address
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler data = 
base.get_metadata_by_address(self.conductor_api, address)
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/api/metadata/base.py",
 line 466, in get_metadata_by_address
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler fixed_ip 
= network.API().get_fixed_ip_by_address(ctxt, address)
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
 line 680, in get_fixed_ip_by_address
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler 
uuid_maps = self._get_instance_uuids_by_ip(context, address)
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
 line 582, in _get_instance_uuids_by_ip
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler data = 
neutronv2.get_client(context).list_ports(**search_opts)
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/network/neutronv2/__init__.py",
 line 69, in get_client
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler raise 
exceptions.Unauthorized()
  2013-11-27 11:44:01,423.423 5895 TRACE nova.api.metadata.handler 
Unauthorized: Unauthorized: bad credentials

  Analyzing this issue we found that Metadata API stopped working since
  change https://review.openstack.org/#/c/56174/4 was merged (it seems
  that change of line 57 in
  https://review.openstack.org/#/c/56174/4/nova/network/neutronv2/__init__.py
  is the reason).

  The commit message looks pretty sane and that fix seems to be the
  right thing to do, because we don't want to do neutron requests on
  behalf of neutron service user we have in nova config, but rather on
  behalf of the admin user instead who made the original request to nova
  api. So it seems that context.is_admin should be extended to make it
  possible to distinguish between those two cases of admin users: the
  real admin users, and the cases when nova api needs to talk to
  neutron.

  The problem is that all metadata queries are handled using default
  admin context (user and other vars are set to None while
  is_admin=True), so with https://review.openstack.org/#/c/56174/4
  applied, get_client() always raises an exception when Metadata API
  requests are handled.
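
  A self-contained simplification of the failure mode described above (the
  real check lives in nova/network/neutronv2/__init__.py and differs in
  detail):

  class Context(object):
      def __init__(self, user_id=None, auth_token=None, is_admin=False):
          self.user_id = user_id
          self.auth_token = auth_token
          self.is_admin = is_admin

  def get_client(context):
      # After change 56174 an admin context is no longer mapped to the
      # configured neutron admin credentials, so a context carrying no token
      # of its own cannot authenticate.
      if context.is_admin and not context.auth_token:
          raise RuntimeError("Unauthorized: bad credentials")
      return "neutron client for %s" % context.user_id

  # Metadata requests are served with a bare admin context: is_admin=True but
  # user and token are None, so every metadata request now fails with a 500.
  try:
      get_client(Context(is_admin=True))
  except RuntimeError as exc:
      print(exc)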

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255577/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1266051] Re: Attach/detach volume iSCSI multipath doesn't work properly if there are different targets associated with different portals for a mulitpath device.

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266051

Title:
  Attach/detach volume iSCSI multipath doesn't work properly if there
  are different targets associated with different portals for a
  mulitpath device.

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  The connect_volume and disconnect_volume code in
  LibvirtISCSIVolumeDriver assumes that the targets for different
  portals are the same for the same multipath device. This is true for
  some arrays but not for others. When there are different targets
  associated with different portals for the same multipath device,
  multipath doesn't work properly during attach/detach volume
  operations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1266051/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258360] Re: Remove unneeded call to conductor in network interface

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258360

Title:
  Remove unneeded call to conductor in network interface

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  Remove unneeded call to conductor in network interface

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270008] Re: periodic tasks will be invalid if a qemu process becomes to defunct status

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270008

Title:
  periodic tasks will be invalid if a qemu process becomes to defunct
  status

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  I am using stable havana nova.
  I got this exception while deleting my kvm instance: the qemu process of the
  instance became 'defunct' for some unknown reason (maybe a qemu/kvm bug), and
  the periodic task then stopped unexpectedly every time, so the resources of
  this compute node are never reported again. Because of the exception below, I
  think we should handle this exception while running the periodic task.
  2014-01-16 15:53:28.421 47954 ERROR nova.openstack.common.periodic_task [-] 
Error during ComputeManager.update_available_resource: cannot get CPU affinity 
of process 62279: No such process
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
Traceback (most recent call last):
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/periodic_task.py", 
line 180, in run_periodic_tasks
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
task(self, context)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5617, in 
update_available_resource
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
rt.update_available_resource(context)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", 
line 246, in inner
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
return f(*args, **kwargs)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 
281, in update_available_resource
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
resources = self.driver.get_available_resource(self.nodename)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4275, 
in get_available_resource
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
stats = self.host_state.get_host_stats(refresh=True)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5350, 
in get_host_stats
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
self.update_status()
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5386, 
in update_status
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
data["vcpus_used"] = self.driver.get_vcpu_used()
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3949, 
in get_vcpu_used
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
vcpus = dom.vcpus()
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 179, in doit
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
result = proxy_call(self._autowrap, f, *args, **kwargs)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 139, in 
proxy_call
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
rv = execute(f,*args,**kwargs)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 77, in tworker
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
rv = meth(*args,**kwargs)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/libvirt.py", line , in vcpus
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
if ret == -1: raise libvirtError ('virDomainGetVcpus() failed', dom=self)
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task 
libvirtError: cannot get CPU affinity of process 62279: No such process
  2014-01-16 15:53:28.421 47954 TRACE nova.openstack.common.periodic_task

  and the exception while I delete this instance:
  2014-01-16 15:13:26.640 47954 ERROR

[Yahoo-eng-team] [Bug 1255609] Re: VMware: possible collision of VNC ports

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255609

Title:
  VMware: possible collision of VNC ports

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  We assign VNC ports to VM instances with the following method:

  def _get_vnc_port(vm_ref):
      """Return VNC port for an VM."""
      vm_id = int(vm_ref.value.replace('vm-', ''))
      port = CONF.vmware.vnc_port + vm_id % CONF.vmware.vnc_port_total
      return port

  the vm_id is a simple counter in vSphere which increments fast and
  there is a chance to get the same port number if the vm_ids are equal
  modulo vnc_port_total (1 by default).
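
  A quick worked example of the collision, assuming the default values
  vnc_port=5900 and vnc_port_total=10000:

  vnc_port = 5900          # CONF.vmware.vnc_port
  vnc_port_total = 10000   # CONF.vmware.vnc_port_total

  # Two vSphere VM ids that differ by a multiple of vnc_port_total map to the
  # same VNC port, so both consoles end up on 6051.
  for vm_id in (151, 10151):
      print(vnc_port + vm_id % vnc_port_total)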

  A report was received that if the port number is reused you may get
  access to the VNC console of another tenant. We need to fix the
  implementation to always choose a port number which is not taken or
  report an error if there are no free ports available.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255609/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267097] Re: A glance error (image unavailable) during a snapshot leaves an instance in SNAPSHOTTING

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1267097

Title:
  A glance error (image unavailable) during a snapshot leaves an
  instance in SNAPSHOTTING

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  If a snapshot is attempted while Glance can't find the underlying image
  (usually because it has been deleted), ImageNotFound is raised to
  nova.compute.manager.ComputeManager._snapshot_instance. The error path for
  this leaves the instance in a snapshotting state, after which "operator
  intervention" is required.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1267097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260697] Re: network_device_mtu configuration does not apply on LibvirtHybridOVSBridgeDriver to OVS VIF ports and VETH pairs

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260697

Title:
  network_device_mtu configuration does not apply on
  LibvirtHybridOVSBridgeDriver to OVS VIF ports and VETH pairs

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  Due to this missing functionality the MTU cannot be increased or adapted to
  specific requirements, for example configuring compute node VIFs to make use
  of jumbo frames.

  LinuxOVSInterfaceDriver and LinuxBridgeInterfaceDriver are using
  network_device_mtu to configure the MTU of some created VIFs.
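
  For reference, the option in question is set in nova.conf as below (9000 is
  only an example jumbo-frame value); the bug is that
  LibvirtHybridOVSBridgeDriver ignores it for the OVS VIF ports and veth pairs
  it creates:

  [DEFAULT]
  network_device_mtu = 9000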

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276510] Re: MySQL 2013 lost connection is being raised

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1276510

Title:
  MySQL 2013 lost connection is being raised

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in Cinder havana series:
  Fix Committed
Status in Ironic (Bare Metal Provisioning):
  Fix Released
Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) havana series:
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in oslo havana series:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Tuskar:
  Fix Released

Bug description:
  MySQL's error code 2013 is not in the list of lost-connection errors.
  This causes the reconnect loop to raise this error and stop retrying.

  [database]
  max_retries = -1
  retry_interval = 1

  mysql down:

  ==> scheduler.log <==
  2014-02-03 16:51:50.956 16184 CRITICAL cinder [-] (OperationalError) (2013, 
"Lost connection to MySQL server at 'reading initial communication packet', 
system error: 0") None None

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1276510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280140] Re: cleanup_running_deleted_instances peroidic task failed with instance not found

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280140

Title:
  cleanup_running_deleted_instances peroidic task failed with instance
  not found

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  This is because the DB query does not include deleted instances when
  _delete_instance_files() runs in the libvirt driver.
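
  A minimal sketch of one way to make that lookup tolerate soft-deleted rows,
  assuming the Havana-era context API (not necessarily the fix that was
  merged):

  from nova import context as nova_context
  from nova.objects import instance as instance_obj

  # An admin context that also returns soft-deleted rows, so the lookup in
  # _delete_instance_files() does not raise InstanceNotFound for a reaped VM.
  ctxt = nova_context.get_admin_context(read_deleted='yes')
  inst = instance_obj.Instance.get_by_uuid(
      ctxt, 'c32db267-21a0-41e7-9d50-931d8396d8cb')  # uuid from the log above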

  I can reproduce this bug both in master and stable havana.
  Reproduce steps:
  1. create an instance
  2. stop nova-compute
  3. wait for nova-manage service list to display xxx for nova-compute
  4. set running_deleted_instance_poll_interval=60 and
  running_deleted_instance_action = reap,
  then start nova-compute and wait for the cleanup periodic task
  5. a warning will be given in the compute log:
  2014-02-14 16:22:25.917 WARNING nova.compute.manager [-] [instance: 
c32db267-21a0-41e7-9d50-931d8396d8cb] Periodic cleanup failed to delete 
instance: Instance c32db267-21a0-41e7-9d50-931d8396d8cb could not be found.

  the debug trace is:
  ipdb> n
  > /opt/stack/nova/nova/virt/libvirt/driver.py(1006)cleanup()
 1005 block_device_mapping = driver.block_device_info_get_mapping(
  -> 1006 block_device_info)
 1007 for vol in block_device_mapping:

  ipdb> n
  > /opt/stack/nova/nova/virt/libvirt/driver.py(1007)cleanup()
 1006 block_device_info)
  -> 1007 for vol in block_device_mapping:
 1008 connection_info = vol['connection_info']

  ipdb> n
  > /opt/stack/nova/nova/virt/libvirt/driver.py(1041)cleanup()
 1040 
  -> 1041 if destroy_disks:
 1042 self._delete_instance_files(instance)

  ipdb> n
  > /opt/stack/nova/nova/virt/libvirt/driver.py(1042)cleanup()
 1041 if destroy_disks:
  -> 1042 self._delete_instance_files(instance)
 1043 

  ipdb> s
  --Call--
  > /opt/stack/nova/nova/virt/libvirt/driver.py(4950)_delete_instance_files()
 4949 
  -> 4950 def _delete_instance_files(self, instance):
 4951 # NOTE(mikal): a shim to handle this file not using instance objects

  
  ipdb> n
  > /opt/stack/nova/nova/virt/libvirt/driver.py(4953)_delete_instance_files()
 4952 # everywhere. Remove this when that conversion happens.

  -> 4953 context = nova_context.get_admin_context()
 4954 inst_obj = instance_obj.Instance.get_by_uuid(context, 
instance['uuid'])

  ipdb> n
  > /opt/stack/nova/nova/virt/libvirt/driver.py(4954)_delete_instance_files()
 4953 context = nova_context.get_admin_context()
  -> 4954 inst_obj = instance_obj.Instance.get_by_uuid(context, 
instance['uuid'])
 4955 

  ipdb> n
  InstanceNotFound: Instance...found.',)
  > /opt/stack/nova/nova/virt/libvirt/driver.py(4954)_delete_instance_files()
 4953 context = nova_context.get_admin_context()
  -> 4954 inst_obj = instance_obj.Instance.get_by_uuid(context, 
instance['uuid'])
 4955 

  ipdb> n
  --Return--
  None
  > /opt/stack/nova/nova/virt/libvirt/driver.py(4954)_delete_instance_files()
 4953 context = nova_context.get_admin_context()
  -> 4954 inst_obj = instance_obj.Instance.get_by_uuid(context, 
instance['uuid'])
 4955 

  ipdb> n
  InstanceNotFound: Instance...found.',)
  > /opt/stack/nova/nova/virt/libvirt/driver.py(1042)cleanup()
 1041 if destroy_disks:
  -> 1042 self._delete_instance_files(instance)
 1043 

  ipdb> n
  --Return--
  None
  > /opt/stack/nova/nova/virt/libvirt/driver.py(1042)cleanup()
 1041 if destroy_disks:
  -> 1042 self._delete_instance_files(instance)
 1043 

  ipdb> n
  InstanceNotFound: Instance...found.',)
  > /opt/stack/nova/nova/virt/libvirt/driver.py(931)destroy()
  930 self.cleanup(context, instance, network_info, block_device_info,
  --> 931 destroy_disks)
  932 

  ipdb> n
  --Return--
  None
  > /opt/stack/nova/nova/virt/libvirt/driver.py(931)destroy()
  930 self.cleanup(context, instance, network_info, block_device_info,
  --> 931 destroy_disks)
  932 

  ipdb> n
  InstanceNotFound: Instance...found.',)
  > /opt/stack/nova/nova/compute/manager.py(1905)_shutdown_instance()
 1904 self.driver.destroy(context, instance, network_info,
  -> 1905 block_device_info)
 1906 except exception.InstancePowerOffFailure:

  ipdb> n
  > /opt/stack/nova/nova/compute/manager.py(1906)_shutdown_instance()
 1905 block_device_info)
  -> 1906 except exception.InstancePowerOffFailure:
 1907 # if the instance can't power off, don't release the ip

  
  ipdb> n
  > /opt/stack/nova/nova/compute/manager.py(1910)_shutdown_instance()
 1909 pass
  -> 1910 except Exception:
 1911 with excutils.save_and_reraise_exception():

  ipdb> n
  > /opt/stack/nova/nova/compute/manager.py(1911)_shutdown_instance()
 1910 except Exception:
  -> 1911 with excutils.save_and_reraise_exception():
 1912 # deallocate ip and fail without proceeding to


[Yahoo-eng-team] [Bug 1270304] Re: error in guestfs driver setup causes orphaned libguestfs processes

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270304

Title:
  error in guestfs driver setup causes orphaned libguestfs processes

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  In the libguestfs driver for Nova VFS, certain errors in the setup
  method can cause a libguestfs process to remain running, even after
  the VM is terminated and/or the method that spawned the libguestfs VM
  has finished.  These processes become zombies when killed, and can
  only be destroyed by restarting nova-compute.

  In the particular issue encountered, when using gluster as a cinder
  backend and attempting to launch a VM before adding a keypair, the
  error would occur.  However, it is feasible that the error could occur
  elsewhere.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270304/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274341] Re: reservation_commit can lead to deadlock

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274341

Title:
  reservation_commit can lead to deadlock

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  reservation_commit can deadlock under load:

  2014-01-29 18:51:00.470 ERROR nova.quota 
[req-659db2df-6124-4e6a-b86b-700aae1de805 183137d95c584efb84e773a21f2ef7a1 
d8d5b06deffc45e7b258eb65ea04017c] Failed to commit reservations 
['5bea6421-1648-4fe1-9d21-4232f536e031', 
'8b27edda-f40e-476c-a9b3-d007aa3f6aac', '58e357df-b45d-4008-9ef3-0da3d8daebbb']
  2014-01-29 18:51:00.470 4380 TRACE nova.quota Traceback (most recent call 
last):
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
"/usr/lib/python2.7/dist-packages/nova/quota.py", line 982, in commit
  2014-01-29 18:51:00.470 4380 TRACE nova.quota 
self._driver.commit(context, reservations, project_id=project_id)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
"/usr/lib/python2.7/dist-packages/nova/quota.py", line 370, in commit
  2014-01-29 18:51:00.470 4380 TRACE nova.quota 
db.reservation_commit(context, reservations, project_id=project_id)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
"/usr/lib/python2.7/dist-packages/nova/db/api.py", line 970, in 
reservation_commit
  2014-01-29 18:51:00.470 4380 TRACE nova.quota project_id=project_id)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 114, in 
wrapper
  2014-01-29 18:51:00.470 4380 TRACE nova.quota return f(*args, **kwargs)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 2786, in 
reservation_commit
  2014-01-29 18:51:00.470 4380 TRACE nova.quota for reservation in 
reservation_query.all():
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2115, in all
  2014-01-29 18:51:00.470 4380 TRACE nova.quota return list(self)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2227, in 
__iter__
  2014-01-29 18:51:00.470 4380 TRACE nova.quota return 
self._execute_and_instances(context)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2242, in 
_execute_and_instances
  2014-01-29 18:51:00.470 4380 TRACE nova.quota result = 
conn.execute(querycontext.statement, self._params)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1449, in 
execute
  2014-01-29 18:51:00.470 4380 TRACE nova.quota params)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1584, in 
_execute_clauseelement
  2014-01-29 18:51:00.470 4380 TRACE nova.quota compiled_sql, 
distilled_params
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1698, in 
_execute_context
  2014-01-29 18:51:00.470 4380 TRACE nova.quota context)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1691, in 
_execute_context
  2014-01-29 18:51:00.470 4380 TRACE nova.quota context)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 331, in 
do_execute
  2014-01-29 18:51:00.470 4380 TRACE nova.quota cursor.execute(statement, 
parameters)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
"/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in execute
  2014-01-29 18:51:00.470 4380 TRACE nova.quota self.errorhandler(self, 
exc, value)
  2014-01-29 18:51:00.470 4380 TRACE nova.quota   File 
"/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in 
defaulterrorhandler
  2014-01-29 18:51:00.470 4380 TRACE nova.quota raise errorclass, errorvalue
  2014-01-29 18:51:00.470 4380 TRACE nova.quota OperationalError: 
(OperationalError) (1213, 'Deadlock found when trying to get lock; try 
restarting transaction') 'SELECT reservations.created_at AS 
reservations_created_at, reservations.updated_at AS reservations_updated_at, 
reservations.deleted_at AS reservations_deleted_at, reservations.deleted AS 
reservations_deleted, reservations.id AS reservations_id, reservations.uuid AS 
reservations_uuid, reservations.usage_id AS reservations_usage_id, 
reservations.project_id AS reservations_project_id, reservations.resource AS 
reservations_resource, reservations.delta AS reservations_delta, 
reservations.expire AS reservations_expire \nFROM r

[Yahoo-eng-team] [Bug 1288463] Re: neutron_metadata_proxy_shared_secret should not be written to log file

2014-03-28 Thread Adam Gandelman
** Also affects: nova/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288463

Title:
  neutron_metadata_proxy_shared_secret should not be written to log file

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  neutron_metadata_proxy_shared_secret should not be written to log file

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288463/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269722] Re: Help message of flag 'enable_isolated_metadata' is not correct

2014-03-28 Thread Adam Gandelman
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1269722

Title:
  Help message of flag 'enable_isolated_metadata' is not correct

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  New

Bug description:
  To provide the metadata service on an isolated subnet, the DHCP agent sends a
  static route to the guest so it can reach the metadata IP. This functionality
  was available through the flag 'enable_isolated_metadata'.
  Thanks to the patch [1], the DHCP agent now determines that a subnet is
  isolated if it doesn't contain a router port.
  The help message of the flag wasn't updated accordingly.

  [1]
  
https://github.com/openstack/neutron/commit/c73b54e50b62c489f04432bdbc5bee678b18226e

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1269722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1248002] Re: bandwidth metering - excluded option doesn't work

2014-03-28 Thread Adam Gandelman
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1248002

Title:
  bandwidth metering - excluded option doesn't work

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  New

Bug description:
  When using the 'meter-label-rule-create' command with neutronclient, the
  excluded option doesn't work.
  The option should exclude this cidr from the label.
  You have to fix the code below:
  neutron/neutron/services/metering/drivers/iptables/iptables_driver.py
  (line number is 157)

  def _process_metering_label_rules(self, rm, rules, label_chain,
                                    rules_chain):
      im = rm.iptables_manager
      ext_dev = self.get_external_device_name(rm.router['gw_port_id'])
      if not ext_dev:
          return

      for rule in rules:
          remote_ip = rule['remote_ip_prefix']

          dir = '-i ' + ext_dev
          if rule['direction'] == 'egress':
              dir = '-o ' + ext_dev

          if rule['excluded'] == 'true':   # --> fix it: True (boolean type)
              ipt_rule = dir + ' -d ' + remote_ip + ' -j RETURN'
              im.ipv4['filter'].add_rule(rules_chain, ipt_rule, wrap=False,
                                         top=True)
          else:
              ipt_rule = dir + ' -d ' + remote_ip + ' -j ' + label_chain
              im.ipv4['filter'].add_rule(rules_chain, ipt_rule,
                                         wrap=False, top=False)
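
  To see why the comparison never matches, a short self-contained check:

  rule = {'excluded': True}            # the API stores a boolean, not a string
  print(rule['excluded'] == 'true')    # False - excluded traffic is never skipped
  print(rule['excluded'] is True)      # True  - the comparison the driver should make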

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1248002/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1256041] Re: Typo in the metering agent when there is an exception when it invokes a driver method

2014-03-28 Thread Adam Gandelman
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1256041

Title:
  Typo in the metering agent when there is an exception when it invokes
  a driver method

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  New

Bug description:
  Typo in the metering agent when there is an exception when it invokes
  a driver method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1256041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257119] Re: PLUMgrid plugin is missing id in update_floatingip

2014-03-28 Thread Adam Gandelman
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257119

Title:
  PLUMgrid plugin is missing id in update_floatingip

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  New

Bug description:
  In the PLUMgrid plugin, update_floatingip is missing the id parameter, as
  shown below:

  def update_floatingip(self, net_db, floating_ip):
      self.plumlib.update_floatingip(net_db, floating_ip, id)
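
  The likely shape of the fix is simply to accept the identifier as a
  parameter instead of accidentally passing Python's built-in id function
  (a sketch, not the merged patch):

  def update_floatingip(self, net_db, floating_ip, id):
      self.plumlib.update_floatingip(net_db, floating_ip, id)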

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263766] Re: Hyper-V agent fails when ceilometer ACLs are already applied

2014-03-28 Thread Adam Gandelman
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1263766

Title:
  Hyper-V agent fails when ceilometer ACLs are already applied

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in neutron havana series:
  New

Bug description:
  The Hyper-V agent fails when restarted after the ceilometer ACLs have
  already been applied.

  Stack trace: http://pastebin.com/xnuszXid

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1263766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263881] Re: ML2 l2-pop Mechanism driver fails when more than one port is added/deleted simultaneously

2014-03-28 Thread Adam Gandelman
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1263881

Title:
  ML2 l2-pop Mechanism driver fails when more than one port is
  added/deleted simultaneously

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  New

Bug description:
  If I create more than one VM at a time (with nova boot option --num-
  instances), sometimes the flooding flows for broadcast, multicast and
  unknown unicast are not added on certain l2 agents.

  And conversely, when I delete more than one VM at a time, the flooding
  rules are not purged on certain l2 agents when necessary.

  I made this test on the trunk version with the OVS agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1263881/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268460] Re: PLUMgrid plugin should report proper error messages

2014-03-28 Thread Adam Gandelman
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1268460

Title:
  PLUMgrid plugin should report proper error messages

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  New

Bug description:
  PLUMgrid Director error messages should be reported at plugin level as
  well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1268460/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276371] Re: Flavor Access settings lost between edits

2014-03-28 Thread Adam Gandelman
** Also affects: horizon/havana
   Importance: Undecided
   Status: New

** Changed in: horizon/havana
   Status: New => Fix Committed

** Changed in: horizon/havana
Milestone: None => 2013.2.3

** Changed in: horizon/havana
 Assignee: (unassigned) => Nicolas Simonds (nicolas.simonds)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1276371

Title:
  Flavor Access settings lost between edits

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Committed

Bug description:
  Public / Flavor Access controls seem to be bugged.

  To reproduce:

  # Create a flavor, ignore the "Flavor Access" tab
  # Watch the flavor get created with "Public" access
  # Click "Edit", go to "Flavor Access", set it to a project or two
  # Click Save
  # Watch the access get switched to "Not Public"
  # Click "Edit", go back to "Flavor Access"
  # See that all flavors have disappeared
  # Click "Save", and watch the access go back to "Public"

  Expected Behaviour:

  Flavor Access / Public settings are preserved between edits

  Actual Behaviour:

  All Flavor Access settings are lost between edits, and unless
  explicitly preserved, an edit can inadvertently make a private flavor
  public.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1276371/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279497] Re: NSX plugin: punctual state synchronization might cause timeouts

2014-03-28 Thread Adam Gandelman
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279497

Title:
  NSX plugin: punctual state synchronization might cause timeouts

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  New

Bug description:
  When performing punctual state synchronization (*) the NSX plugin
  might trigger the infamous eventlet-mysql deadlock since a NSX API
  call is performed from within a DB transaction.

  This should be avoided, especially since the transaction is not needed
  in this case.

  
  (*) This can be enabled either via a configuration variable or by explicitly 
selecting the 'status' field in Neutron APIs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1279497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276510] Re: MySQL 2013 lost connection is being raised

2014-03-28 Thread Dolph Mathews
This appears to have been fixed in 96be7449 as part of a larger sync
with oslo.db

  https://review.openstack.org/#/c/71311/

** Changed in: keystone
   Status: In Progress => Invalid

** Changed in: keystone
 Assignee: Xurong Yang (idopra) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1276510

Title:
  MySQL 2013 lost connection is being raised

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in Cinder havana series:
  Fix Committed
Status in Ironic (Bare Metal Provisioning):
  Fix Released
Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in oslo havana series:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Tuskar:
  Fix Released

Bug description:
  MySQL's error code 2013 is not in the list of lost-connection errors.
  This causes the reconnect loop to raise this error and stop retrying.

  [database]
  max_retries = -1
  retry_interval = 1

  mysql down:

  ==> scheduler.log <==
  2014-02-03 16:51:50.956 16184 CRITICAL cinder [-] (OperationalError) (2013, 
"Lost connection to MySQL server at 'reading initial communication packet', 
system error: 0") None None

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1276510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281772] Re: NSX: sync do not pass around model object

2014-03-28 Thread Adam Gandelman
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1281772

Title:
  NSX: sync do not pass around model object

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  New
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  The NSX sync backend previously passed around a SQLAlchemy model object,
  which was convenient because we did not need to query the database an
  additional time to update the status of an object. Unfortunately, this was
  all done within a DB transaction that included a call to NSX, which could
  cause a deadlock, so it needed to be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1281772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1282452] Re: gate-neutron-python27 fails: "greenlet.GreenletExit" and then "WARNING: Unable to find to confirm results!"

2014-03-28 Thread Adam Gandelman
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1282452

Title:
  gate-neutron-python27 fails: "greenlet.GreenletExit" and then
  "WARNING: Unable to find  to confirm results!"

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  New
Status in OpenStack Core Infrastructure:
  Invalid

Bug description:
  gate-neutron-python27 fails with "WARNING: Unable to find  to confirm 
results!".
  The original cause is "greenlet.GreenletExit" and testr aborted.

  message:"greenlet.GreenletExit" AND filename:console.html
  87 hits in the last 48 hours (84 are in gate-neutron-python27 and 3 are in 
gate-oslo-incubator-python27).

  I am not sure whether this is a bug in Neutron or in the CI infra, but the
  hit rate above suggests it is only related to Neutron on Python 2.7.

  logstash:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiZ3JlZW5sZXQuR3JlZW5sZXRFeGl0XCIgQU5EIGZpbGVuYW1lOmNvbnNvbGUuaHRtbCIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiY3VzdG9tIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7ImZyb20iOiIyMDE0LTAyLTE4VDA4OjA5OjQ3KzAwOjAwIiwidG8iOiIyMDE0LTAyLTIwVDA4OjA5OjQ3KzAwOjAwIiwidXNlcl9pbnRlcnZhbCI6IjAifSwic3RhbXAiOjEzOTI4ODU1NDA1MjAsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1282452/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1190149] Re: Token auth fails when token is larger than 8k

2014-03-28 Thread Adam Gandelman
** Changed in: glance/havana
   Importance: Medium => Undecided

** Also affects: heat/havana
   Importance: Undecided
   Status: New

** Changed in: heat/havana
   Importance: Undecided => Critical

** Changed in: heat/havana
   Status: New => Fix Committed

** Changed in: heat/havana
Milestone: None => 2013.2.3

** Changed in: heat/havana
   Status: Fix Committed => New

** Changed in: heat/havana
Milestone: 2013.2.3 => None

** Changed in: heat/havana
 Assignee: (unassigned) => Zhang Yang (neil-zhangyang)

** Changed in: heat/havana
Milestone: None => 2013.2.3

** Changed in: heat/havana
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1190149

Title:
  Token auth fails when token is larger than 8k

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Committed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Committed
Status in Orchestration API (Heat):
  Fix Released
Status in heat havana series:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Object Storage (Swift):
  Fix Released
Status in Trove - Database as a Service:
  Fix Released

Bug description:
  The following tests fail when there are 8 or more endpoints registered with 
keystone 
  tempest.api.compute.test_auth_token.AuthTokenTestJSON.test_v3_token 
  tempest.api.compute.test_auth_token.AuthTokenTestXML.test_v3_token

  Steps to reproduce:
  - run devstack with the following services (the heat h-* APIs push the 
endpoint count over the threshold):

ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-sch,horizon,mysql,rabbit,sysstat,tempest,s-proxy,s-account,s-container,s-object,cinder,c-api,c-vol,c-sch,n-cond,heat,h-api,h-api-cfn,h-api-cw,h-eng,n-net
  - run the failing tempest tests, eg
testr run test_v3_token
  - results in the following errors:
  ERROR: tempest.api.compute.test_auth_token.AuthTokenTestJSON.test_v3_token
  tags: worker-0
  --
  Traceback (most recent call last):
File "tempest/api/compute/test_auth_token.py", line 48, in test_v3_token
  self.servers_v3.list_servers()
File "tempest/services/compute/json/servers_client.py", line 138, in 
list_servers
  resp, body = self.get(url)
File "tempest/common/rest_client.py", line 269, in get
  return self.request('GET', url, headers)
File "tempest/common/rest_client.py", line 394, in request
  resp, resp_body)
File "tempest/common/rest_client.py", line 443, in _error_checker
  resp_body = self._parse_resp(resp_body)
File "tempest/common/rest_client.py", line 327, in _parse_resp
  return json.loads(body)
File "/usr/lib64/python2.7/json/__init__.py", line 326, in loads
  return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
  obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
  raise ValueError("No JSON object could be decoded")
  ValueError: No JSON object could be decoded
  ==
  ERROR: tempest.api.compute.test_auth_token.AuthTokenTestXML.test_v3_token
  tags: worker-0
  --
  Traceback (most recent call last):
File "tempest/api/compute/test_auth_token.py", line 48, in test_v3_token
  self.servers_v3.list_servers()
File "tempest/services/compute/xml/servers_client.py", line 181, in 
list_servers
  resp, body = self.get(url, self.headers)
File "tempest/common/rest_client.py", line 269, in get
  return self.request('GET', url, headers)
File "tempest/common/rest_client.py", line 394, in request
  resp, resp_body)
File "tempest/common/rest_client.py", line 443, in _error_checker
  resp_body = self._parse_resp(resp_body)
File "tempest/common/rest_client.py", line 519, in _parse_resp
  return xml_to_json(etree.fromstring(body))
File "lxml.etree.pyx", line 2993, in lxml.etree.fromstring 
(src/lxml/lxml.etree.c:63285)
File "parser.pxi", line 1617, in lxml.etree._parseMemoryDocument 
(src/lxml/lxml.etree.c:93571)
File "parser.pxi", line 1495, in lxml.etree._parseDoc 
(src/lxml/lxml.etree.c:92370)
File "parser.pxi", line 1011, in lxml.etree._BaseParser._parseDoc 
(src/lxml/lxml.etree.c:89010)
File "parser.pxi", line 577, in 
lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:84711)
File "parser.pxi", line 676, in lxml.etree._handleParseResult 
(src/lxml/lxml.etree.c:85816)
File "parser.px

[Yahoo-eng-team] [Bug 1244092] Re: db connection retrying doesn't work against db2

2014-03-28 Thread Adam Gandelman
** Also affects: cinder/havana
   Importance: Undecided
   Status: New

** Changed in: cinder/havana
   Importance: Undecided => Low

** Changed in: cinder/havana
   Status: New => Fix Committed

** Changed in: cinder/havana
Milestone: None => 2013.2.3

** Changed in: cinder/havana
 Assignee: (unassigned) => Flavio Percoco (flaper87)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244092

Title:
  db connection retrying doesn't work against db2

Status in Cinder:
  Confirmed
Status in Cinder havana series:
  Fix Committed
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  When I start OpenStack with the steps below, the OpenStack services can't be 
started without a DB2 connection:
  1. start the OpenStack services;
  2. start the DB2 service.

  I checked the code in session.py under
  nova/openstack/common/db/sqlalchemy. The root cause is that the DB2
  connection error code "-30081" isn't in conn_err_codes in the
  _is_db_connection_error function, so the connection-retrying code is
  skipped for DB2. To enable connection retrying against DB2, we need to
  add DB2 support to the _is_db_connection_error function.
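
  As a minimal illustrative sketch only (not the actual oslo-incubator code),
  the fix the reporter describes amounts to recognising DB2's "-30081"
  connection error alongside the MySQL codes, so the retry loop is entered
  for DB2 too. The helper and conn_err_codes names come from the report; the
  concrete MySQL codes listed here are assumptions:

      def _is_db_connection_error(args):
          # Return True if the error string looks like a lost/failed
          # connection: MySQL "can't connect" codes plus DB2's SQL30081N code.
          conn_err_codes = ('2002', '2003', '2006', '-30081')
          for err_code in conn_err_codes:
              if args.find(err_code) != -1:
                  return True
          return False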

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1244092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245947] Re: [DB2] Glance image-list command failed when there are more than DEFAULT_PAGE_SIZE images

2014-03-28 Thread Adam Gandelman
** Also affects: glance/havana
   Importance: Undecided
   Status: New

** Changed in: glance/havana
   Importance: Undecided => Medium

** Changed in: glance/havana
   Status: New => Fix Committed

** Changed in: glance/havana
Milestone: None => 2013.2.3

** Changed in: glance/havana
 Assignee: (unassigned) => Dong Liu (liudong78)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1245947

Title:
  [DB2] Glance image-list command failed when there are more than
  DEFAULT_PAGE_SIZE images

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Committed

Bug description:
  The glance image-list command will fail when there are more than
  DEFAULT_PAGE_SIZE images, and the user will run into the error below with
  the CLI.

  ERROR: The server has either erred or is incapable of performing the
  requested operation. (HTTP 500) (Request-ID: req-
  30c23185-8a84-4321-8fd0-155b25904405)

  It only happens on the DB2 backend after applying the patch
  https://review.openstack.org/#/c/35178/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1245947/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268966] Re: glance requires pyOpenSSL>=0.11

2014-03-28 Thread Adam Gandelman
** Also affects: glance/havana
   Importance: Undecided
   Status: New

** Changed in: glance/havana
   Importance: Undecided => High

** Changed in: glance/havana
   Status: New => Fix Committed

** Changed in: glance/havana
Milestone: None => 2013.2.3

** Changed in: glance/havana
 Assignee: (unassigned) => Dmitry Kulishenko (dmitryk-g)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1268966

Title:
  glance requires pyOpenSSL>=0.11

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Committed

Bug description:
  While running unit tests with pyOpenSSL==0.10, I got an exception:
  _StringException: Traceback (most recent call last):
File 
"/usr/lib/python2.6/site-packages/glance/tests/unit/common/test_utils.py", line 
197, in test_validate_key_cert_key
  utils.validate_key_cert(keyfile, certfile)
File "/usr/lib/python2.6/site-packages/glance/common/utils.py", line 507, 
in validate_key_cert
  out = crypto.sign(key, data, digest)
  AttributeError: 'module' object has no attribute 'sign'

  
  Since OpenSSL.crypto.sign() is new in version 0.11, we have to update the 
pyOpenSSL requirement to pyOpenSSL>=0.11.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1268966/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276510] Re: MySQL 2013 lost connection is being raised

2014-03-28 Thread Adam Gandelman
** Also affects: cinder/havana
   Importance: Undecided
   Status: New

** Changed in: cinder/havana
   Importance: Undecided => High

** Changed in: cinder/havana
   Status: New => Fix Committed

** Changed in: cinder/havana
Milestone: None => 2013.2.3

** Changed in: cinder/havana
 Assignee: (unassigned) => Flavio Percoco (flaper87)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1276510

Title:
  MySQL 2013 lost connection is being raised

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in Cinder havana series:
  Fix Committed
Status in Ironic (Bare Metal Provisioning):
  Fix Released
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in oslo havana series:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Tuskar:
  Fix Released

Bug description:
  MySQL's error code 2013 is not in the list of lost-connection issues.
  This causes the reconnect loop to raise this error and stop retrying.

  [database]
  max_retries = -1
  retry_interval = 1

  mysql down:

  ==> scheduler.log <==
  2014-02-03 16:51:50.956 16184 CRITICAL cinder [-] (OperationalError) (2013, 
"Lost connection to MySQL server at 'reading initial communication packet', 
system error: 0") None None

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1276510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299188] [NEW] qunit and jasmine urlpatterns should be defined only for test

2014-03-28 Thread Akihiro Motoki
Public bug reported:

qunit and jasmine urlpatterns are defined in horizon/site_urls.py if 
settings.DEBUG is True.
DEBUG=True is a valid situation outside a dev environment (it is reasonable 
when testing).

It is better to use a flag other than DEBUG and define the "jasmine" and
"qunit" urlpatterns only if the new flag is True; a sketch follows below.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1299188

Title:
  qunit and jasmine urlpatterns should be defined only for test

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  qunit and jasmine urlpatterns are defined in horizon/site_urls.py if 
settings.DEBUG is True.
  DEBUG=True is a valid situation outside a dev environment (it is reasonable 
when testing).

  It is better to use a flag other than DEBUG and define the "jasmine" and
  "qunit" urlpatterns only if the new flag is True.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1299188/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288328] Re: "View details" of containers is not accurate

2014-03-28 Thread Akihiro Motoki
Multiple folks, including me, confirmed it works as expected, so I am closing
this issue by marking it "Invalid". If it still occurs, feel free to
reopen.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1288328

Title:
  "View details" of containers is not accurate

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  From the Horizon portal we can view the containers and the objects in
  them. When attempting to view the details of a container, it always
  reports the container's "Object Count" as None and its "Size" as 0
  bytes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1288328/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299159] [NEW] Hyper-V agent cannot add ICMP security group rules

2014-03-28 Thread Claudiu Belu
Public bug reported:

The Hyper-V agent throws exceptions each time it tries to add a security
group rule which has ICMP as protocol.

This is caused by the fact that Hyper-V
Msvm_EthernetSwitchPortExtendedAclSettingData can accept only TCP and
UDP as protocols.

Agent log:
http://pastebin.com/HwrczsfX

** Affects: neutron
 Importance: Undecided
 Assignee: Claudiu Belu (cbelu)
 Status: New


** Tags: hyper-v

** Changed in: neutron
 Assignee: (unassigned) => Claudiu Belu (cbelu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1299159

Title:
  Hyper-V agent cannot add ICMP security group rules

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Hyper-V agent throws exceptions each time it tries to add a
  security group rule which has ICMP as protocol.

  This is caused by the fact that Hyper-V
  Msvm_EthernetSwitchPortExtendedAclSettingData can accept only TCP and
  UDP as protocols.

  Agent log:
  http://pastebin.com/HwrczsfX

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1299159/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299156] [NEW] Hyper-V agent does not disable security group rules

2014-03-28 Thread Claudiu Belu
Public bug reported:

A new config option, enable_security_group, was introduced recently. This config 
option specifies whether the agent should have security groups enabled or not.
The Hyper-V agent does not take this config option into account, which means 
the security group rules are always applied.

** Affects: neutron
 Importance: Undecided
 Assignee: Claudiu Belu (cbelu)
 Status: New


** Tags: hyper-v

** Changed in: neutron
 Assignee: (unassigned) => Claudiu Belu (cbelu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1299156

Title:
  Hyper-V agent does not disable security group rules

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  A new config option, enable_security_group, was introduced recently. This 
config option specifies whether the agent should have security groups enabled 
or not.
  The Hyper-V agent does not take this config option into account, which means 
the security group rules are always applied.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1299156/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299144] [NEW] resume instance doesn't work

2014-03-28 Thread Cindy Lu
Public bug reported:

How to Reproduce

1. Suspend Instance
2. Resume Instance

You will get a success message, but the instance is still in "Suspended"
state.

Please see image.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "032814 - resume instance.png"
   
https://bugs.launchpad.net/bugs/1299144/+attachment/4049183/+files/032814%20-%20resume%20instance.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1299144

Title:
  resume instance doesn't work

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  How to Reproduce

  1. Suspend Instance
  2. Resume Instance

  You will get a success message, but the instance is still in
  "Suspended" state.

  Please see image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1299144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1213660] Re: nova-consoleauth memcached

2014-03-28 Thread Scott Devoid
Bug is in openstack.common.memorycache:
https://github.com/openstack/nova/blob/stable/grizzly/nova/openstack/common/memorycache.py

Specifically, get_client should not "pass" on ImportError when
CONF.memcached_servers is set; see the sketch after the snippet below.

def get_client(memcached_servers=None):
    client_cls = Client

    if not memcached_servers:
        memcached_servers = CONF.memcached_servers
    if memcached_servers:
        try:
            import memcache
            client_cls = memcache.Client
        except ImportError:
            pass
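
As an illustrative sketch only (not the merged Oslo fix), the behaviour the
comment asks for would be to fail loudly when memcached servers are explicitly
configured but the python-memcache library cannot be imported, rather than
silently falling back to the in-process dict cache. The helper name here is a
placeholder:

    # Hypothetical helper for illustration; only memcache.Client and the
    # memcached_servers value correspond to the code quoted above.
    def get_memcache_client(memcached_servers):
        if not memcached_servers:
            return None  # caller falls back to the local in-memory cache
        try:
            import memcache
        except ImportError:
            raise RuntimeError(
                "memcached_servers=%r is configured but the 'memcache' "
                "module is not installed" % (memcached_servers,))
        return memcache.Client(memcached_servers, debug=0)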

** Changed in: nova
   Status: Incomplete => Confirmed

** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1213660

Title:
  nova-consoleauth memcached

Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  Hello Guys,

  If memcached doesn't respond, there aren't any error messages:

  
https://github.com/openstack/nova/blob/stable/grizzly/nova/consoleauth/manager.py#L85

  Looking at the code I found this problem in my environment, but from the
  logs it seems that memcached is there.

  Regards,
  Federico

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1213660/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299151] [NEW] nova-consoleauth processes requests when disabled

2014-03-28 Thread Scott Devoid
Public bug reported:

Not sure if this is a bug or not. But nova-consoleauth will process
requests even if it is listed as disabled in the service list.

| nova-consoleauth | u9-p| internal | disabled | up  |
| nova-consoleauth | u10-p   | internal | enabled  | up|
| nova-consoleauth | u11-p   | internal | enabled  | up|
In this case I can watch as u9-p continues to process requests from the message 
bus.

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  Not sure if this is a bug or not. But nova-consoleauth will process
  requests even if it is listed as disabled in the service list.
  
- | nova-consoleauth | u9-p| internal | disabled | up  | 
2014-03-28T18:11:45.00 | None|
- | nova-consoleauth | u10-p   | internal | enabled  | up| 
2014-03-28T18:24:33.00 | None|
- | nova-consoleauth | u11-p   | internal | enabled  | up| 
2014-03-28T18:24:32.00 | None|
- 
- In this case I can watch as u9-p continues to process requests from the
- message bus.
+ | nova-consoleauth | u9-p| internal | disabled | up  |
+ | nova-consoleauth | u10-p   | internal | enabled  | up|
+ | nova-consoleauth | u11-p   | internal | enabled  | up|
+ In this case I can watch as u9-p continues to process requests from the 
message bus.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1299151

Title:
  nova-consoleauth processes requests when disabled

Status in OpenStack Compute (Nova):
  New

Bug description:
  Not sure if this is a bug or not. But nova-consoleauth will process
  requests even if it is listed as disabled in the service list.

  | nova-consoleauth | u9-p| internal | disabled | up  |
  | nova-consoleauth | u10-p   | internal | enabled  | up|
  | nova-consoleauth | u11-p   | internal | enabled  | up|
  In this case I can watch as u9-p continues to process requests from the 
message bus.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1299151/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299145] [NEW] Documentation Bug, BigSwitch should be "Big Switch"

2014-03-28 Thread Kanzhe Jiang
Public bug reported:

BigSwitch references should be changed to "Big Switch".

** Affects: neutron
 Importance: Undecided
 Assignee: Kanzhe Jiang (kanzhe)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kanzhe Jiang (kanzhe)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1299145

Title:
  Documentation Bug, BigSwitch should be "Big Switch"

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  BigSwitch references should be changed to "Big Switch".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1299145/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299139] [NEW] Instances stuck in deleting task_state never cleaned up

2014-03-28 Thread Matt Riedemann
Public bug reported:

Bug 1248563 "Instance deletion is prevented when another component locks
up" provided a partial fix https://review.openstack.org/#/c/55444/ which
introduces another problem, which is subsequent delete requests are
ignored.

When doing Tempest 3rd party CI runs we see instances fail to build
(could be a scheduling/resource problem, timeout, whatever) and then get
stuck in deleting task_state and are never cleaned up.

The patch even says:

"Dealing with delete requests that never got executed is not in scope of
this change and will be submitted separately."

That's the bug reported here.  For example, this is several hours after
our Tempest run finished:

http://paste.openstack.org/show/74584/

There is also some history after patch 55444 merged, we had this revert
of a revert https://review.openstack.org/#/c/70187/, which got reverted
itself again later because it was causing race failures in hyper-v CI:

https://review.openstack.org/#/c/71363/

So there is a lot of half-baked code here, and I haven't been able to get
a response from Stan on bug 1248563, but basically it boils down to this: the
original change 55444 depended on some later changes working, and those
were ultimately reverted due to race conditions breaking in the gate.

I would propose that at least for icehouse-rc1 we get the original patch
reverted since it's not a complete solution and introduces another bug.

** Affects: nova
 Importance: High
 Status: New


** Tags: api icehouse-rc-potential

** Changed in: nova
   Importance: Undecided => High

** Tags added: icehouse-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1299139

Title:
  Instances stuck in deleting task_state never cleaned up

Status in OpenStack Compute (Nova):
  New

Bug description:
  Bug 1248563 "Instance deletion is prevented when another component
  locks up" provided a partial fix
  https://review.openstack.org/#/c/55444/ which introduces another
  problem, which is subsequent delete requests are ignored.

  When doing Tempest 3rd party CI runs we see instances fail to build
  (could be a scheduling/resource problem, timeout, whatever) and then
  get stuck in deleting task_state and are never cleaned up.

  The patch even says:

  "Dealing with delete requests that never got executed is not in scope
  of this change and will be submitted separately."

  That's the bug reported here.  For example, this is several hours
  after our Tempest run finished:

  http://paste.openstack.org/show/74584/

  There is also some history after patch 55444 merged, we had this
  revert of a revert https://review.openstack.org/#/c/70187/, which got
  reverted itself again later because it was causing race failures in
  hyper-v CI:

  https://review.openstack.org/#/c/71363/

  So there is a lot of half-baked code here, and I haven't been able to
  get a response from Stan on bug 1248563, but basically it boils down to
  this: the original change 55444 depended on some later changes working,
  and those were ultimately reverted due to race conditions breaking in
  the gate.

  I would propose that at least for icehouse-rc1 we get the original
  patch reverted since it's not a complete solution and introduces
  another bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1299139/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281453] Re: Replace exception "re-raises" with excutils.save_and_reraise_exception()

2014-03-28 Thread Mark McClain
This has been covered by other work in Neutron.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1281453

Title:
  Replace exception "re-raises" with
  excutils.save_and_reraise_exception()

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  There are quite a few places in the Glance code where exceptions are
  re-raised:

    try:
        some_operation()
    except FooException as e:
        do_something1()
        raise
    except BarException as e:
        do_something2()
        raise

  These places should use the excutils.save_and_reraise_exception class
  because in some cases the exception context can be cleared, resulting
  in None being attempted to be re-raised after an exception handler is
  run (see excutils.save_and_reraise_exception for more).

    try:
        some_operation()
    except FooException as e:
        with excutils.save_and_reraise_exception() as ctxt:
            do_something1()
    except BarException as e:
        with excutils.save_and_reraise_exception() as ctxt:
            do_something2()

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1281453/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298981] [NEW] Skip resizing disk if the parameter resize_instance is False

2014-03-28 Thread sahid
Public bug reported:

In the libvirt driver, the driver.finish_migration method is called with an
extra parameter, 'resize_instance'. It should be used to determine whether or
not it is necessary to resize the disks.

** Affects: nova
 Importance: Wishlist
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298981

Title:
  Skip resizing disk if the parameter resize_instance is False

Status in OpenStack Compute (Nova):
  New

Bug description:
  In the libvirt driver, the driver.finish_migration method is called with an
extra parameter, 'resize_instance'. It should be used to determine whether or
not it is necessary to resize the disks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1298981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299135] [NEW] nova unit test fail as exceptions.ExternalIpAddressExhaustedClient not found

2014-03-28 Thread Bhuvan Arumugam
Public bug reported:

Unit tests fail, likely a regression caused by
https://github.com/openstack/nova/commit/cd1423cef7621e0b557cbe0260f661d08811236b.

I'm unsure why it's not caught in the gate.

Apparently exceptions.ExternalIpAddressExhaustedClient is not defined.
Refer to attached log for more details.

traceback-1: {{{
Traceback (most recent call last):
  File 
"/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/mox.py",
 line 286, in VerifyAll
mock_obj._Verify()
  File 
"/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/mox.py",
 line 506, in _Verify
raise ExpectedMethodCallsError(self._expected_calls_queue)
ExpectedMethodCallsError: Verify: Expected methods never called:
  0.  
}}}

traceback-2: {{{
Traceback (most recent call last):
  File 
"/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/fixtures/fixture.py",
 line 112, in cleanUp
return self._cleanups(raise_errors=raise_first)
  File 
"/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/fixtures/callmany.py",
 line 88, in __call__
reraise(error[0], error[1], error[2])
  File 
"/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/fixtures/callmany.py",
 line 82, in __call__
cleanup(*args, **kwargs)
  File 
"/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/mox.py",
 line 286, in VerifyAll
mock_obj._Verify()
  File 
"/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/mox.py",
 line 506, in _Verify
raise ExpectedMethodCallsError(self._expected_calls_queue)
ExpectedMethodCallsError: Verify: Expected methods never called:
  0.  
}}}

Traceback (most recent call last):
  File "nova/tests/network/test_neutronv2.py", line 1639, in 
test_allocate_floating_ip_exhausted_fail
AndRaise(exceptions.ExternalIpAddressExhaustedClient)
AttributeError: 'module' object has no attribute 
'ExternalIpAddressExhaustedClient'

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "neutron-del.sql"
   
https://bugs.launchpad.net/bugs/1299135/+attachment/4049130/+files/neutron-del.sql

** Attachment removed: "neutron-del.sql"
   
https://bugs.launchpad.net/nova/+bug/1299135/+attachment/4049130/+files/neutron-del.sql

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1299135

Title:
  nova unit test fail as exceptions.ExternalIpAddressExhaustedClient not
  found

Status in OpenStack Compute (Nova):
  New

Bug description:
  Unit tests fail, likely a regression caused by
  
https://github.com/openstack/nova/commit/cd1423cef7621e0b557cbe0260f661d08811236b.

  I'm unsure why it's not caught in the gate.

  Apparently exceptions.ExternalIpAddressExhaustedClient is not defined.
  Refer to attached log for more details.

  traceback-1: {{{
  Traceback (most recent call last):
File 
"/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/mox.py",
 line 286, in VerifyAll
  mock_obj._Verify()
File 
"/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/mox.py",
 line 506, in _Verify
  raise ExpectedMethodCallsError(self._expected_calls_queue)
  ExpectedMethodCallsError: Verify: Expected methods never called:
0.  
  }}}

  traceback-2: {{{
  Traceback (most recent call last):
File 
"/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/fixtures/fixture.py",
 line 112, in cleanUp
  return self._cleanups(raise_errors=raise_first)
File 
"/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/fixtures/callmany.py",
 line 88, in __call__
  reraise(error[0], error[1], error[2])
File 
"/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/fixtures/callmany.py",
 line 82, in __call__
  cleanup(*args, **kwargs)
File 
"/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/mox.py",
 line 286, in VerifyAll
  mock_obj._Verify()
File 
"/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/mox.py",
 line 506, in _Verify
  raise ExpectedMethodCallsError(self._expected_calls_queue)
  ExpectedMethodCallsError: Verify: Expected methods never called:
0.  
  }}}

  Traceback (most recent call last):
File "nova/tests/network/test_neutronv2.py", line 1639, in 
test_allocate_floating_ip_exhausted_fail
  AndRaise(exceptions.ExternalIpAddressExhaustedClient)
  AttributeError: 'module' object has no attribute 
'ExternalIpAddressExhaustedClient'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1299135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launch

[Yahoo-eng-team] [Bug 1290772] Re: MatchMakerRedis doesn't work [zeromq]

2014-03-28 Thread Mark McLoughlin
This should probably be an oslo.messaging bug report only now.

I don't really have any insight as to whether there's a real bug here,
but it sounds plausible, so marking it Confirmed.

** Changed in: nova
   Status: New => Invalid

** Tags removed: conductor console
** Tags added: zmq

** Changed in: oslo.messaging
   Status: New => Confirmed

** Changed in: oslo.messaging
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290772

Title:
  MatchMakerRedis doesn't work [zeromq]

Status in OpenStack Compute (Nova):
  Invalid
Status in Messaging API for OpenStack:
  Confirmed

Bug description:
  I was testing zeromq with redis. I installed all packages from ubuntu cloud 
repository[havana].  I added the following lines in nova.conf.
  ...
  rpc_zmq_matchmaker = 
nova.openstack.common.rpc.matchmaker_redis.MatchMakerRedis
  rpc_backend = nova.openstack.common.rpc.impl_zmq
  ...

  I get the following error
  2014-03-11 09:57:58.671 11201 ERROR nova.openstack.common.threadgroup [-] 
Command # 1 (SADD scheduler.ubuntu scheduler.ubuntu.ubuntu) of pipeline caused 
error: Operation against a key holding the wrong kind of value

  The same error is reported in the following services:
  nova-conductor
  nova-consoleauth
  nova-scheduler

  The problem seems to come from the matchmaker using the same key to
  register a set of hosts and the hosts themselves.
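
  As an illustrative sketch only (plain redis-py, not Nova or Oslo code), the
  error in the log can be reproduced when one key is first written as a plain
  string value and then used as a set, which matches the matchmaker reusing
  the same key for a host entry and for the set of hosts:

      import redis

      r = redis.StrictRedis(host='localhost', port=6379)
      r.set('scheduler.ubuntu', 'scheduler.ubuntu.ubuntu')  # key holds a string
      try:
          r.sadd('scheduler.ubuntu', 'scheduler.ubuntu.ubuntu')  # set op on it
      except redis.ResponseError as exc:
          # WRONGTYPE Operation against a key holding the wrong kind of value
          print(exc)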

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1290772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299039] [NEW] Token Scoping

2014-03-28 Thread Abu Shohel Ahmed
Public bug reported:

In the Havana stable release, for both v2.0 and v3:

A scoped token can be used to get another scoped or unscoped token.
This can be exploited by anyone who has gained access to a scoped
token.

For example:

1. userA is related to two projects: Project1, Project2
2. userA creates tokenA scoped to Project1
3. userA shares tokenA with a (malicious) third party
4. The third party can now make a token-creation call to create a new tokenB 
scoped to Project2 using tokenA (see the sketch below).

Although we know that a bearer token has an all-or-nothing property, scoping 
the token can limit the exposure.
A scoped token should not be allowed to create another scoped token.
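
As an illustrative sketch only (the endpoint URL, project id and token values
are placeholders), step 4 uses the standard Keystone v3 token auth method: the
stolen tokenA is submitted as the identity and a different project as the
scope, and the response header carries the new tokenB:

    import json
    import requests

    KEYSTONE = 'http://keystone.example.com:5000'  # placeholder endpoint
    body = {
        "auth": {
            "identity": {"methods": ["token"], "token": {"id": "<tokenA>"}},
            "scope": {"project": {"id": "<Project2 id>"}},
        }
    }
    resp = requests.post(KEYSTONE + '/v3/auth/tokens',
                         data=json.dumps(body),
                         headers={'Content-Type': 'application/json'})
    token_b = resp.headers.get('X-Subject-Token')  # new token scoped to Project2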

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: security

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1299039

Title:
  Token Scoping

Status in OpenStack Identity (Keystone):
  New

Bug description:
  In the Havana stable release, for both v2.0 and v3:

  A scoped token can be used to get another scoped or unscoped token.
  This can be exploited by anyone who has gained access to a scoped
  token.

  For example:

  1. userA is related to two projects: Project1, Project2
  2. userA creates tokenA scoped to Project1
  3. userA shares tokenA with a (malicious) third party
  4. The third party can now make a token-creation call to create a new tokenB 
scoped to Project2 using tokenA.

  Although we know that a bearer token has an all-or-nothing property, scoping 
  the token can limit the exposure.
  A scoped token should not be allowed to create another scoped token.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1299039/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299046] [NEW] Remove Quantum compatibility layer

2014-03-28 Thread Mark McClain
Public bug reported:

The deprecation period is over, so now is the time to remove the Quantum
compatibility layer.

** Affects: neutron
 Importance: Low
 Assignee: Mark McClain (markmcclain)
 Status: In Progress


** Tags: neutron-core

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
Milestone: None => juno-1

** Changed in: neutron
 Assignee: (unassigned) => Mark McClain (markmcclain)

** Changed in: neutron
   Status: New => Triaged

** Tags added: neutron-core

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1299046

Title:
  Remove Quantum compatibility layer

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The deprecation period is over, so now is the time to remove the
  Quantum compatibility layer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1299046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299033] [NEW] enabled_emulation greatly reduces keystone performance

2014-03-28 Thread Matt Fischer
Public bug reported:

When enabled_emulation is enabled, the performance of Keystone suffers
greatly: I see an approximately 4x slower result when it is enabled. I
discussed this in my blog post
(http://www.mattfischer.com/blog/?p=561) and was asked by Yuriy to file a
bug. Here are some results. Each query returned about 20 rows, but I've
removed them since they contain private emails and whatnot.

enabled_emulation off:

root@j1:~# time keystone user-list
+--+--+-+---+
|  id  | name | enabled |   email   |
+--+--+-+---+
|admin |admin |   True  |   |
...
+--+--+-+---+

real0m2.767s
user0m0.380s
sys 0m0.284s


enabled_emulation on:

root@j1:~# time keystone user-list
+--+--+-+---+
|  id  | name | enabled |   email   |
+--+--+-+---+
|admin |admin |   True  |   |
...
+--+--+-+---+

real0m9.099s
user0m0.508s
sys 0m0.084s

Similar results happen for tenant enabled emulation.

My LDAP box is a Free IPA server running on CentOS if that matters.

I'm running Keystone 2013.2.2-0ubuntu1~cloud0

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1299033

Title:
  enabled_emulation greatly reduces keystone performance

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When enabled_emulation is enabled, the performance of Keystone suffers
  greatly: I see an approximately 4x slower result when it is enabled. I
  discussed this in my blog post
  (http://www.mattfischer.com/blog/?p=561) and was asked by Yuriy to file
  a bug. Here are some results. Each query returned about 20 rows, but
  I've removed them since they contain private emails and whatnot.

  enabled_emulation off:

  root@j1:~# time keystone user-list
  +--+--+-+---+
  |  id  | name | enabled |   email   |
  +--+--+-+---+
  |admin |admin |   True  |   |
  ...
  +--+--+-+---+

  real  0m2.767s
  user  0m0.380s
  sys   0m0.284s

  
  enabled_emulation on:

  root@j1:~# time keystone user-list
  +--+--+-+---+
  |  id  | name | enabled |   email   |
  +--+--+-+---+
  |admin |admin |   True  |   |
  ...
  +--+--+-+---+

  real  0m9.099s
  user  0m0.508s
  sys   0m0.084s

  Similar results happen for tenant enabled emulation.

  My LDAP box is a Free IPA server running on CentOS if that matters.

  I'm running Keystone 2013.2.2-0ubuntu1~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1299033/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299130] [NEW] Encode PKI token (back port changes to Havana)

2014-03-28 Thread Priti Desai
Public bug reported:

Authenticating a user based on a pre-existing PKI token is not supported
in Havana. PKI tokens are much longer than, and different from, their id (the
id column of the token table). When a PKI token is passed as token_id to POST
…/auth/tokens, Havana does not encode the PKI token to generate its ID, as
Icehouse does.

Havana is missing this if statement:

if isinstance(token_id, six.text_type):
    token_id = token_id.encode('utf-8')

https://github.com/openstack/keystone/blob/stable/havana/keystone/common/cms.py

if is_ans1_token(token_id):
    hasher = hashlib.md5()
    hasher.update(token_id)
    return hasher.hexdigest()

IceHouse version:

if is_ans1_token(token_id):
    hasher = hashlib.md5()
    if isinstance(token_id, six.text_type):
        token_id = token_id.encode('utf-8')
    hasher.update(token_id)
    return hasher.hexdigest()

Is it possible to backport these changes into Havana?


More info: 
https://ask.openstack.org/en/question/25971/is-there-a-rest-api-to-retrieve-token-id-id-column-from-token-table-of-an-pki-token/

** Affects: keystone
 Importance: Undecided
 Assignee: Priti Desai (priti-desai)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Priti Desai (priti-desai)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1299130

Title:
  Encode PKI token (back port changes to Havana)

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Authenticating a user based on a pre-existing PKI token is not supported
  in Havana. PKI tokens are much longer than, and different from, their id
  (the id column of the token table). When a PKI token is passed as token_id
  to POST …/auth/tokens, Havana does not encode the PKI token to generate its
  ID, as Icehouse does.

  Havana is missing this if statement:

  if isinstance(token_id, six.text_type):
      token_id = token_id.encode('utf-8')

  
https://github.com/openstack/keystone/blob/stable/havana/keystone/common/cms.py

  if is_ans1_token(token_id):
      hasher = hashlib.md5()
      hasher.update(token_id)
      return hasher.hexdigest()

  IceHouse version:

  if is_ans1_token(token_id):
      hasher = hashlib.md5()
      if isinstance(token_id, six.text_type):
          token_id = token_id.encode('utf-8')
      hasher.update(token_id)
      return hasher.hexdigest()

  Is it possible to backport these changes into Havana?

  
  More info: 
  
https://ask.openstack.org/en/question/25971/is-there-a-rest-api-to-retrieve-token-id-id-column-from-token-table-of-an-pki-token/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1299130/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299126] [NEW] local variable 'keypair' referenced before assignment

2014-03-28 Thread Florent Flament
Public bug reported:

When a user who isn't allowed to create a keypair tries to create one
through the Horizon dashboard, a traceback is displayed instead of the
usual nice error message.

Details:

Environment:


Request Method: GET
Request URL: 
http://localhost:8080/project/access_and_security/keypairs/az/generate/

Django Version: 1.6.2
Python Version: 2.7.3
Installed Applications:
['openstack_dashboard.dashboards.project',
 'openstack_dashboard.dashboards.admin',
 'openstack_dashboard.dashboards.settings',
 'openstack_dashboard',
 'django.contrib.contenttypes',
 'django.contrib.auth',
 'django.contrib.sessions',
 'django.contrib.messages',
 'django.contrib.staticfiles',
 'django.contrib.humanize',
 'compressor',
 'horizon',
 'openstack_auth']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
 'django.middleware.csrf.CsrfViewMiddleware',
 'django.contrib.sessions.middleware.SessionMiddleware',
 'django.contrib.auth.middleware.AuthenticationMiddleware',
 'django.contrib.messages.middleware.MessageMiddleware',
 'horizon.middleware.HorizonMiddleware',
 'django.middleware.doc.XViewMiddleware',
 'django.middleware.locale.LocaleMiddleware',
 'django.middleware.clickjacking.XFrameOptionsMiddleware')


Traceback:
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py" in 
get_response
  114. response = wrapped_callback(request, *callback_args, 
**callback_kwargs)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py" 
in dec
  38. return view_func(request, *args, **kwargs)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py" 
in dec
  54. return view_func(request, *args, **kwargs)
File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py" 
in dec
  38. return view_func(request, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py" in 
view
  69. return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py" in 
dispatch
  87. return handler(request, *args, **kwargs)
File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/access_and_security/keypairs/views.py"
 in get
  78. 'attachment; filename=%s.pem' % slugify(keypair.name)

Exception Type: UnboundLocalError at 
/project/access_and_security/keypairs/az/generate/
Exception Value: local variable 'keypair' referenced before assignment
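
As an illustrative sketch only (hypothetical names, not Horizon's actual
views.py), the traceback pattern arises when the variable is assigned only
inside a try block, so a failed creation (for example a policy denial) leaves
the name unbound when it is referenced later:

    class Forbidden(Exception):
        pass

    def create_keypair(name):
        # Stand-in for the API call that the policy check rejects.
        raise Forbidden("user is not allowed to create keypairs")

    def generate_view(name):
        try:
            keypair = create_keypair(name)
        except Forbidden:
            print("Unable to create key pair.")  # handled, but no early return
        # 'keypair' was never assigned, so this raises UnboundLocalError
        return 'attachment; filename=%s.pem' % keypair.name

Returning early (or re-raising) inside the except block avoids referencing the
unbound name and lets the usual error message be shown instead.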

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1299126

Title:
  local variable 'keypair' referenced before assignment

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When a user who isn't allowed to create a keypair tries to create one
  through the Horizon dashboard, a traceback is displayed instead of
  the usual nice error message.

  Details:

  Environment:

  
  Request Method: GET
  Request URL: 
http://localhost:8080/project/access_and_security/keypairs/az/generate/

  Django Version: 1.6.2
  Python Version: 2.7.3
  Installed Applications:
  ['openstack_dashboard.dashboards.project',
   'openstack_dashboard.dashboards.admin',
   'openstack_dashboard.dashboards.settings',
   'openstack_dashboard',
   'django.contrib.contenttypes',
   'django.contrib.auth',
   'django.contrib.sessions',
   'django.contrib.messages',
   'django.contrib.staticfiles',
   'django.contrib.humanize',
   'compressor',
   'horizon',
   'openstack_auth']
  Installed Middleware:
  ('django.middleware.common.CommonMiddleware',
   'django.middleware.csrf.CsrfViewMiddleware',
   'django.contrib.sessions.middleware.SessionMiddleware',
   'django.contrib.auth.middleware.AuthenticationMiddleware',
   'django.contrib.messages.middleware.MessageMiddleware',
   'horizon.middleware.HorizonMiddleware',
   'django.middleware.doc.XViewMiddleware',
   'django.middleware.locale.LocaleMiddleware',
   'django.middleware.clickjacking.XFrameOptionsMiddleware')

  
  Traceback:
  File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py" in 
get_response
114. response = wrapped_callback(request, 
*callback_args, **callback_kwargs)
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py" in dec
38. return view_func(request, *args, **kwargs)
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py" in dec
54. return view_func(request, *args, **kwargs)
  File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py" in dec
38. return view_func(request, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py" in 
view
69. return self.d

[Yahoo-eng-team] [Bug 1299049] [NEW] reorganize unit tests into directories instead of mangling names

2014-03-28 Thread Mark McClain
Public bug reported:

The Unit Test directory needs reorganization instead of containing most
test modules in root directory.

** Affects: neutron
 Importance: Wishlist
 Assignee: Mark McClain (markmcclain)
 Status: In Progress


** Tags: neutron-core

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1299049

Title:
  reorganize unit tests into directories instead of mangling names

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The Unit Test directory needs reorganization instead of containing
  most test modules in root directory.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1299049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279739] Re: nova.cmd.rpc_zmq_receiver:main is missing

2014-03-28 Thread Mark McLoughlin
Adding devstack since that's what needs to be updated

** Changed in: nova
   Importance: Undecided => Low

** Changed in: oslo.messaging
   Status: Confirmed => Invalid

** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: devstack
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279739

Title:
  nova.cmd.rpc_zmq_receiver:main is missing

Status in devstack - openstack dev environments:
  Confirmed
Status in OpenStack Compute (Nova):
  In Progress
Status in Messaging API for OpenStack:
  Invalid

Bug description:
  I am trying to run devstack with zeromq, but zeromq failed to start.

  al/bin/nova-rpc-zmq-receiver & echo $! >/opt/stack/status/stack/zeromq.pid; 
fg || echo "zeromq failed to start" | tee 
"/opt/stack/status/stack/zeromq.failure"
  [1] 25102
  cd /opt/stack/nova && /usr/local/bin/nova-rpc-zmq-receiver
  Traceback (most recent call last):
File "/usr/local/bin/nova-rpc-zmq-receiver", line 6, in 
  from nova.cmd.rpc_zmq_receiver import main
  ImportError: No module named rpc_zmq_receiver
  zeromq failed to start

  
  I found at https://github.com/openstack/nova/blob/master/setup.cfg:
  nova-rpc-zmq-receiver = nova.cmd.rpc_zmq_receiver:main

  but at https://github.com/openstack/nova/tree/master/nova/cmd:
  we have no rpc_zmq_receiver module at all.
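
  For reference, a console_scripts entry point maps a command name to a module
  path and a callable, so 'nova-rpc-zmq-receiver = nova.cmd.rpc_zmq_receiver:main'
  requires a nova/cmd/rpc_zmq_receiver.py module exposing main(). A minimal,
  purely illustrative sketch of such a module (not Nova's actual implementation,
  which is exactly what is missing here):

  # nova/cmd/rpc_zmq_receiver.py -- illustrative stub only; the real module is
  # absent, which is why the generated console script fails to import.
  import sys


  def main():
      # A real receiver would parse config and run the ZeroMQ reactor here.
      print("zmq receiver placeholder")
      return 0


  if __name__ == "__main__":
      sys.exit(main())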

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1279739/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293627] Re: When a VM is in suspended status, and the host(a dedicated compute node) get rebooted, the VM can not be resumed after the host is back to active.

2014-03-28 Thread Akihiro Motoki
Horizon does not support rebooting a compute node, so this is apparently NOT
related to Horizon. Horizon just retrieves the status from Nova. If you still
have an issue, please report it to nova (or to cinder if it only happens with
an attached volume).

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1293627

Title:
  When a VM is in suspended status, and the host(a dedicated compute
  node) get rebooted, the VM can not be resumed after the host is back
  to active.

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The VM is an Ubuntu Server 12.04 LTS instance with one cinder volume
  attached. After I suspended the VM and rebooted the host (which is a
  dedicated compute node), the VM can't be resumed when the host is back.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1293627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1112912] Re: get_firewall_required should use VIF parameter from neutron

2014-03-28 Thread Thierry Carrez
Is this exploitable?

** Also affects: ossa
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: New => Incomplete

** Information type changed from Public to Public Security

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1112912

Title:
  get_firewall_required should use VIF parameter from neutron

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Security Advisories:
  Incomplete
Status in Tempest:
  In Progress

Bug description:
  This bug report is from the discussion of
  https://review.openstack.org/#/c/19126/17/nova/virt/libvirt/vif.py

  I'm going to file this as a bug to track the issue.

  The patch introduces the get_firewall_required function, but it checks only
  the conf file. This should use the quantum VIF parameter instead.
  
https://github.com/openstack/quantum/blob/master/quantum/plugins/openvswitch/ovs_quantum_plugin.py#L513
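
  As a rough illustration only (the names below are hypothetical, not Nova's
  actual code), the decision could be keyed off the VIF details returned by
  quantum/neutron for the port rather than a deployment-wide config flag:

  # Illustrative sketch: 'vif' is assumed to be the dict-like object received
  # from quantum/neutron for the port; the real attribute names may differ.
  def firewall_required(vif, conf):
      details = vif.get('details') or {}
      # If the plugin reports that it already applies port filtering (e.g.
      # security groups handled by the plugin), no extra nova firewall
      # driver is needed for this VIF.
      if details.get('port_filter'):
          return False
      # Otherwise fall back to the configuration, as the patch under review
      # currently does.
      return conf.firewall_driver is not None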

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1112912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299095] [NEW] Location header is set on HTTP 200

2014-03-28 Thread Dave Walker
Public bug reported:

Glance seems to be setting the Location HTTP header when no redirect is
intended. The contents of the Location field are equal to that of the
request.

I haven't been able to work out the reasoning behind this, but it has the
wider consequence that some web stacks and proxies see the Location field
and turn the response into a 302 (which glanceclient et al. then follow,
causing a redirect loop).

Is this Location field used for anything meaningful?

To contrast, another Project has seen similar behavior here:
http://bitten.edgewall.org/ticket/607

Their interpretation of RFC 2616 §14.30 is such that Glance's behavior is
incompatible with the RFC.

As example:

From Glance running directly:
$ curl -i -X HEAD \
  -H 'X-Auth-Token: a2dbc60c0b7641578215f4fb814ab33f' \
  -H 'Content-Type: application/octet-stream' \
  -H 'User-Agent: python-glanceclient' \
  http://{REDACTED}/v1/images/2db2f647-866c-4cb8-8bf3-646d20f6ee4c
HTTP/1.0 200 OK
Date: Fri, 28 Mar 2014 12:52:40 GMT
Server: WSGIServer/0.1 Python/2.7.3
Content-Type: text/html; charset=UTF-8
Content-Length: 0
x-image-meta-property-ramdisk_id: 401ec901-b01d-4bc7-96fb-660143a6d456
x-image-meta-id: 2db2f647-866c-4cb8-8bf3-646d20f6ee4c
x-image-meta-deleted: False
x-image-meta-container_format: ami
x-image-meta-checksum: f8a22dc65b3d9b6e63678955bd83
x-image-meta-protected: False
x-image-meta-min_disk: 0
x-image-meta-created_at: 2013-12-05T10:54:02
x-image-meta-size: 25165824
x-image-meta-status: active
x-image-meta-is_public: True
x-image-meta-min_ram: 0
x-image-meta-property-kernel_id: ff846cac-e6f6-49fc-a44e-69be33462c5b
x-image-meta-owner: e72df10b1afb49d2979d75bd00074365
x-image-meta-updated_at: 2013-12-05T10:54:02
x-image-meta-disk_format: ami
x-image-meta-name: cirros-0.3.1-x86_64-uec
Location: http://{REDACTED}/v1/images/2db2f647-866c-4cb8-8bf3-646d20f6ee4c
ETag: f8a22dc65b3d9b6e63678955bd83
x-openstack-request-id: req-77053d34-51f9-41b6-8ab1-d9693fc751d4

From Glance running behind apache+fcgid+flup:
$ curl -i \
>  -X HEAD \
>  -H 'X-Auth-Token: a2dbc60c0b7641578215f4fb814ab33f' \
>  -H 'Content-Type: application/octet-stream' \
>  -H 'User-Agent: python-glanceclient' 
> http://{REDACTED}/v1/images/2db2f647-866c-4cb8-8bf3-646d20f6ee4c
HTTP/1.1 302 Found
Date: Fri, 28 Mar 2014 12:58:28 GMT
Server: Apache
X-MS-Unique-Id: UzVx9ArGan4AACOJHOQF
x-image-meta-property-ramdisk_id: 401ec901-b01d-4bc7-96fb-660143a6d456
x-image-meta-id: 2db2f647-866c-4cb8-8bf3-646d20f6ee4c
x-image-meta-deleted: False
x-image-meta-container_format: ami
x-image-meta-checksum: f8a22dc65b3d9b6e63678955bd83
x-image-meta-protected: False
x-image-meta-min_disk: 0
x-image-meta-created_at: 2013-12-05T10:54:02
x-image-meta-size: 25165824
x-image-meta-status: active
x-image-meta-is_public: True
x-image-meta-min_ram: 0
x-image-meta-property-kernel_id: ff846cac-e6f6-49fc-a44e-69be33462c5b
x-image-meta-owner: e72df10b1afb49d2979d75bd00074365
x-image-meta-updated_at: 2013-12-05T10:54:02
x-image-meta-disk_format: ami
x-image-meta-name: cirros-0.3.1-x86_64-uec
ETag: f8a22dc65b3d9b6e63678955bd83
x-openstack-request-id: req-f0506f3d-c510-4a3f-9464-a37c82513811
Location: http://{REDACTED}/v1/images/2db2f647-866c-4cb8-8bf3-646d20f6ee4c
Content-Type: text/html; charset=iso-8859-1
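
As a purely illustrative workaround (not part of Glance), a deployer could
strip the spurious header in a small piece of WSGI middleware so that
intermediaries never see a Location on a plain 200; a minimal sketch, assuming
a standard WSGI stack:

# Minimal sketch of WSGI middleware that drops a Location header from
# non-redirect responses; names are illustrative, not Glance code.
class StripLocationOnOk(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        def _start_response(status, headers, exc_info=None):
            code = int(status.split(' ', 1)[0])
            # 201 Created and 3xx responses legitimately carry Location.
            if code != 201 and not (300 <= code < 400):
                headers = [(k, v) for k, v in headers
                           if k.lower() != 'location']
            return start_response(status, headers, exc_info)

        return self.app(environ, _start_response)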

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1299095

Title:
  Location header is set on HTTP 200

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Glance seems to be setting the Location HTTP header when no redirect is
  intended. The contents of the Location field are equal to that of the
  request.

  I haven't been able to work out the reasoning behind this, but it has the
  wider consequence that some web stacks and proxies see the Location field
  and turn the response into a 302 (which glanceclient et al. then follow,
  causing a redirect loop).

  Is this Location field used for anything meaningful?

  To contrast, another Project has seen similar behavior here:
  http://bitten.edgewall.org/ticket/607

  Their interpretation of RFC 2616 §14.30 is such that Glance's behavior is
  incompatible with the RFC.

  As example:

  From Glance running directly:
  $ curl -i -X HEAD \
    -H 'X-Auth-Token: a2dbc60c0b7641578215f4fb814ab33f' \
    -H 'Content-Type: application/octet-stream' \
    -H 'User-Agent: python-glanceclient' \
    http://{REDACTED}/v1/images/2db2f647-866c-4cb8-8bf3-646d20f6ee4c
  HTTP/1.0 200 OK
  Date: Fri, 28 Mar 2014 12:52:40 GMT
  Server: WSGIServer/0.1 Python/2.7.3
  Content-Type: text/html; charset=UTF-8
  Content-Length: 0
  x-image-meta-property-ramdisk_id: 401ec901-b01d-4bc7-96fb-660143a6d456
  x-image-meta-id: 2db2f647-866c-4cb8-8bf3-646d20f6ee4c
  x-image-meta-deleted: False
  x-image-meta-container_format: ami
  x-image-me

[Yahoo-eng-team] [Bug 1298991] [NEW] Duplicated colon in _container_metadata.html

2014-03-28 Thread Łukasz Jernaś
Public bug reported:

There's a duplicated colon in _container_metadata.html

{% trans "Size: " %}: {{
container.container_bytes_used|filesizeformat }}

As https://bugs.launchpad.net/horizon/+bug/1296075 will require more
work, I'm reporting it as a separate issue.

** Affects: horizon
 Importance: Undecided
 Assignee: Łukasz Jernaś (deejay1)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1298991

Title:
  Duplicated colon in _container_metadata.html

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There's a duplicated colon in _container_metadata.html

  {% trans "Size: " %}: {{
  container.container_bytes_used|filesizeformat }}

  As https://bugs.launchpad.net/horizon/+bug/1296075 will require more
  work, I'm reporting it as a separate issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1298991/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297465] Re: ChunkReader has no len() Swiftclient + Glance

2014-03-28 Thread Michael Petersen
** Also affects: python-swiftclient
   Importance: Undecided
   Status: New

** No longer affects: swift

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1297465

Title:
  ChunkReader has no len() Swiftclient + Glance

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Python client library for Swift:
  New

Bug description:
  On CentOS 6.5

  Name: openstack-glance
  Arch: noarch
  Version : 2013.2.2
  Release : 2.el6
  Size: 52 k
  Repo: installed
  From repo   : openstack-havana

  When uploading an image that is larger than swift's chunk size, I
  receive a ChunkReader error:

  2014-03-25 18:29:16.977 21092 TRACE glance.api.v1.upload_utils
  TypeError: object of type 'ChunkReader' has no len()

  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift TypeError:
  object of type 'ChunkReader' has no len()

  Full Traceback:

  2014-03-25 18:29:16.916 21092 ERROR glance.store.swift 
[34633528-d86d-4055-bdc9-1bcdf872fc2b OpenStack Admin 
57813631e9e5420589216b33925ef6a3] Error during chunked upload to backend, 
deleting stale chunks
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift Traceback (most recent 
call last):
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
"/usr/lib/python2.6/site-packages/glance/store/swift.py", line 384, in add
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift 
content_length=content_length)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
"/usr/lib/python2.6/site-packages/swiftclient/client.py", line 1318, in 
put_object
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift 
response_dict=response_dict)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
"/usr/lib/python2.6/site-packages/swiftclient/client.py", line 1192, in _retry
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift rv = 
func(self.url, self.token, *args, **kwargs)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
"/usr/lib/python2.6/site-packages/swiftclient/client.py", line 943, in 
put_object
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift 
conn.putrequest(path, headers=headers, data=contents)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
"/usr/lib/python2.6/site-packages/swiftclient/client.py", line 197, in 
putrequest
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift return 
self.request('PUT', full_path, data, headers, files)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
"/usr/lib/python2.6/site-packages/swiftclient/client.py", line 187, in request
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift files=files, 
**self.requests_args)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
"/usr/lib/python2.6/site-packages/swiftclient/client.py", line 176, in _request
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift return 
requests.request(*arg, **kwarg)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
"/usr/lib/python2.6/site-packages/requests/api.py", line 44, in request
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift return 
session.request(method=method, url=url, **kwargs)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
"/usr/lib/python2.6/site-packages/requests/sessions.py", line 276, in request
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift prep = 
req.prepare()
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
"/usr/lib/python2.6/site-packages/requests/models.py", line 224, in prepare
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift 
p.prepare_body(self.data, self.files)
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift   File 
"/usr/lib/python2.6/site-packages/requests/models.py", line 384, in prepare_body
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift 
self.headers['Content-Length'] = str(len(body))
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift TypeError: object of 
type 'ChunkReader' has no len()
  2014-03-25 18:29:16.916 21092 TRACE glance.store.swift 
  2014-03-25 18:29:16.977 21092 ERROR glance.api.v1.upload_utils 
[34633528-d86d-4055-bdc9-1bcdf872fc2b OpenStack Admin 
57813631e9e5420589216b33925ef6a3] Failed to upload image 
375a5784-8911-433a-a2c2-56ea0c621eda
  2014-03-25 18:29:16.977 21092 TRACE glance.api.v1.upload_utils Traceback 
(most recent call last):
  2014-03-25 18:29:16.977 21092 TRACE glance.api.v1.upload_utils   File 
"/usr/lib/python2.6/site-packages/glance/api/v1/upload_utils.py", line 101, in 
upload_data_to_store
  2014-03-25 18:29:16.977 21092 TRACE glance.api.v1.upload_utils store)
  2014-03-25 18:29:16.977 21092 TRACE glance.api.v1.upload_utils   File 
"/usr/lib/python2.6/site-packages/glance/store/__init__.py", line 333, in 
store_add_to_backend
  2014-03-25 18:29:16.977 21092 TRACE glan
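
  The root cause is visible in the frames above: requests computes
  Content-Length via len(body) when no explicit length is supplied, so the
  reader object passed down from Glance must either expose __len__ or the
  caller must set Content-Length itself. A standalone sketch of the first
  option (illustrative only, not the Glance or swiftclient code):

  # Illustrative sketch: a file-like chunk reader that exposes __len__, so
  # libraries that build Content-Length via len(body) keep working.
  class SizedChunkReader(object):
      def __init__(self, fd, length):
          self.fd = fd
          self.length = length
          self.remaining = length

      def read(self, size=-1):
          if size < 0 or size > self.remaining:
              size = self.remaining
          data = self.fd.read(size)
          self.remaining -= len(data)
          return data

      def __len__(self):
          # This is what str(len(body)) in requests.models ends up calling.
          return self.length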

[Yahoo-eng-team] [Bug 1298976] [NEW] Be sure converted image will be restored

2014-03-28 Thread sahid
Public bug reported:

In the libvirt driver, during the disk-resize process, if an image is qcow2
without a partition table, the process converts the instance's disk to raw.

After the extend, we should restore the original format in all cases, not
only when 'use_cow_images' is configured to True.
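
A minimal sketch of the intended flow, driving qemu-img directly (illustrative
only; the libvirt driver uses its own image utilities and paths):

# Illustrative sketch, not Nova code: convert to raw, resize, then always
# convert back to the original format, independent of 'use_cow_images'.
import subprocess


def resize_partitionless_qcow2(path, new_size_bytes):
    raw_path = path + '.raw'
    subprocess.check_call(
        ['qemu-img', 'convert', '-f', 'qcow2', '-O', 'raw', path, raw_path])
    subprocess.check_call(
        ['qemu-img', 'resize', raw_path, str(new_size_bytes)])
    # Restore the original qcow2 format unconditionally.
    subprocess.check_call(
        ['qemu-img', 'convert', '-f', 'raw', '-O', 'qcow2', raw_path, path])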

** Affects: nova
 Importance: Low
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298976

Title:
  Be sure converted image will be restored

Status in OpenStack Compute (Nova):
  New

Bug description:
  In the libvirt driver, during the disk-resize process, if an image is qcow2
  without a partition table, the process converts the instance's disk to raw.

  After the extend, we should restore the original format in all cases, not
  only when 'use_cow_images' is configured to True.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1298976/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298975] [NEW] libvirt.finish_migration is too large and not tested

2014-03-28 Thread sahid
Public bug reported:

This method needs to be split into several small methods, and each method
has to be tested.

A possible decomposition (sketched below) could be:
  * a method that determines the disk size from instance properties
  * methods to convert a disk from qcow2 to raw and from raw to qcow2
  * a method to resize the disk
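
A rough sketch of how that decomposition could look (helper names and
signatures are hypothetical, not the actual Nova code):

# Hypothetical decomposition of finish_migration into small, testable helpers.
def _disk_size_from_instance(instance):
    """Determine the target root disk size, in bytes, from the flavor."""
    return instance['root_gb'] * 1024 ** 3


def _convert_image(path, source_fmt, dest_fmt):
    """Convert a disk image between qcow2 and raw (either direction)."""
    pass  # e.g. wrap qemu-img convert


def _resize_disk(path, size_bytes):
    """Grow the disk file to the requested size."""
    pass  # e.g. wrap qemu-img resize


def finish_migration(instance, disk_path):
    size = _disk_size_from_instance(instance)
    _convert_image(disk_path, 'qcow2', 'raw')
    _resize_disk(disk_path, size)
    _convert_image(disk_path, 'raw', 'qcow2')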

** Affects: nova
 Importance: Wishlist
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress


** Tags: libvirt testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298975

Title:
  libvirt.finish_migration is too large and not tested

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  This method needs to be split into several small methods, and each method
  has to be tested.

  A possible decomposition could be:
    * a method that determines the disk size from instance properties
    * methods to convert a disk from qcow2 to raw and from raw to qcow2
    * a method to resize the disk

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1298975/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298934] [NEW] Cannot understand "Gigabytes" in default quotas table

2014-03-28 Thread Akihiro Motoki
Public bug reported:

In the default quotas table, we see the entry "Gigabytes". It comes from
Cinder, and it is reasonable as output from Cinder. However, Horizon displays
quota values from various projects, and "Gigabytes" on its own is hard to
understand. I would suggest changing it to "Volume Gigabytes".

Similarly, "Snapshots" means "Volume Snapshots"; it would be better to rename
it to "Volume Snapshots".

** Affects: horizon
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1298934

Title:
  Cannot understand "Gigabytes" in default quotas table

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the default quotas table, we see the entry "Gigabytes". It comes from
  Cinder, and it is reasonable as output from Cinder. However, Horizon
  displays quota values from various projects, and "Gigabytes" on its own is
  hard to understand. I would suggest changing it to "Volume Gigabytes".

  Similarly, "Snapshots" means "Volume Snapshots"; it would be better to
  rename it to "Volume Snapshots".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1298934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298918] [NEW] unit tests causes http-requests to domain broker

2014-03-28 Thread Erno Kuvaja
Public bug reported:

The glance/tests/unit/v1 tests cause a bunch of HTTP requests originally
directed to glance.com. That domain seems to be parked with a domain broker
and returns an advertisement page from:
http://return.uk.domainnamesales.com/return_js.php?d=glance.com&s=1396003096.

We should avoid having our tests announce to the world who is running them,
and where and when.
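
One generic way to keep unit tests from reaching the network at all (a sketch
of a common technique, not something Glance does today) is to stub out
outgoing connections for the duration of each test:

# Generic sketch: make any attempt to open an outgoing connection fail fast
# during unit tests, so accidental calls to real domains are caught locally.
import unittest

import mock  # unittest.mock on newer Pythons


class NoNetworkTestCase(unittest.TestCase):
    def setUp(self):
        super(NoNetworkTestCase, self).setUp()

        def _refuse(*args, **kwargs):
            raise AssertionError("unit test tried to open a network connection")

        patcher = mock.patch('socket.create_connection', _refuse)
        patcher.start()
        self.addCleanup(patcher.stop)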

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1298918

Title:
  unit tests causes http-requests to domain broker

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The glance/tests/unit/v1 tests cause a bunch of HTTP requests originally
  directed to glance.com. That domain seems to be parked with a domain broker
  and returns an advertisement page from:
  http://return.uk.domainnamesales.com/return_js.php?d=glance.com&s=1396003096.

  We should avoid having our tests announce to the world who is running them,
  and where and when.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1298918/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298332] Re: Tests using Django-1.6 fail

2014-03-28 Thread Matthias Runge
I was able to track this down to a too-old version of django-nose.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1298332

Title:
  Tests using Django-1.6 fail

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  After installing Django-1.6 (while Django-1.5 works flawlessly):

  [mrunge@sofja horizon (master)]$ ./run_tests.sh -N
  Running Horizon application tests
  Traceback (most recent call last):
File "/home/mrunge/work/horizon/manage.py", line 23, in 
  execute_from_command_line(sys.argv)
File "/usr/lib/python2.7/site-packages/django/core/management/__init__.py", 
line 399, in execute_from_command_line
  utility.execute()
File "/usr/lib/python2.7/site-packages/django/core/management/__init__.py", 
line 392, in execute
  self.fetch_command(subcommand).run_from_argv(self.argv)
File 
"/usr/lib/python2.7/site-packages/django/core/management/commands/test.py", 
line 50, in run_from_argv
  super(Command, self).run_from_argv(argv)
File "/usr/lib/python2.7/site-packages/django/core/management/base.py", 
line 238, in run_from_argv
  parser = self.create_parser(argv[0], argv[1])
File 
"/usr/lib/python2.7/site-packages/django/core/management/commands/test.py", 
line 59, in create_parser
  option_list=options)
File "/usr/lib64/python2.7/optparse.py", line 1219, in __init__
  add_help=add_help_option)
File "/usr/lib64/python2.7/optparse.py", line 1261, in _populate_option_list
  self.add_options(option_list)
File "/usr/lib64/python2.7/optparse.py", line 1039, in add_options
  self.add_option(option)
File "/usr/lib64/python2.7/optparse.py", line 1020, in add_option
  self._check_conflict(option)
File "/usr/lib64/python2.7/optparse.py", line 995, in _check_conflict
  option)
  optparse.OptionConflictError: option -p/--pattern: conflicting option 
string(s): -p

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1298332/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298907] [NEW] neutron CLI returns CSV with CR character

2014-03-28 Thread Jacek Nykis
Public bug reported:

I noticed that neutron commands return Windows-style line endings when used
with the "-f csv" option:
$ neutron net-list -c id -f csv|head -1|od -A x -t x1z -v
000000 22 69 64 22 0d 0a                          >"id"..<
000006

This causes problems in bash loops:
$ set -x
$ for i in $(neutron net-list -c id -f csv);do neutron net-show $i;done

+ for i in '$(neutron net-list -c id -f csv)'
' neutron net-show '"xxx"
'nable to find network with name '"xxx"
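
The 0d 0a bytes almost certainly come from Python's csv module, whose writer
uses '\r\n' as the default line terminator; a small sketch of the difference
(illustrative, not the neutronclient code itself; Python 3 syntax shown):

# csv.writer defaults to lineterminator='\r\n', which is where the stray CR
# comes from; passing lineterminator='\n' produces Unix-style output.
import csv
import io

buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_ALL).writerow(["id"])
print(repr(buf.getvalue()))   # '"id"\r\n'

buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_ALL, lineterminator='\n').writerow(["id"])
print(repr(buf.getvalue()))   # '"id"\n'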

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1298907

Title:
  neutron CLI returns CSV with CR character

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I noticed that neutron commands return Windows-style line endings when used
  with the "-f csv" option:
  $ neutron net-list -c id -f csv|head -1|od -A x -t x1z -v
  000000 22 69 64 22 0d 0a                          >"id"..<
  000006

  This causes problems in bash loops:
  $ set -x
  $ for i in $(neutron net-list -c id -f csv);do neutron net-show $i;done
  
  + for i in '$(neutron net-list -c id -f csv)'
  ' neutron net-show '"xxx"
  'nable to find network with name '"xxx"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1298907/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298791] Re: File "rfc.sh" is missing in neutron code - https://github.com/openstack/neutron.git

2014-03-28 Thread Darragh O'Reilly
you should just need to install the git-review package

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1298791

Title:
  File "rfc.sh" is missing in neutron code -
  https://github.com/openstack/neutron.git

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I am working on a neutron bug

  In order to push my changes I followed the steps below:

  1) Cloned the neutron code from https://github.com/openstack/neutron.git
  2) Made my code changes and ran git commit.
  3) After that, when I ran git review, I found the rfc.sh file is missing.

  "git review"-
  -
  $ git review
  fatal: Not a git repository (or any of the parent directories): .git
  sh: 0: Can't open /tools/rfc.sh
  fatal: While expanding alias 'review': 'sh `git rev-parse 
--show-toplevel`/tools/rfc.sh': No such file or directory
  --

  Workaround:

  Copied the rfc.sh file from
  https://github.com/asomya/quantum/blob/master/tools/rfc.sh. After that, "git
  review" executed successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1298791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298865] [NEW] NVP advanced service plugin should check router status before deploying a service

2014-03-28 Thread berlin
Public bug reported:

With the NVP advanced service plugin, router creation is asynchronous while
all service calls are synchronous, so it is possible for an advanced service
request to arrive before the edge deployment has completed.
One solution is to check the router status before deploying an advanced
service. If the router is not in ACTIVE status, the service deployment request
would return a "Router not ready" error.

** Affects: neutron
 Importance: Undecided
 Assignee: berlin (linb)
 Status: Confirmed


** Tags: nicira

** Changed in: neutron
 Assignee: (unassigned) => berlin (linb)

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1298865

Title:
  NVP advanced service plugin should check router status before
  deploying a service

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  With the NVP advanced service plugin, router creation is asynchronous while
  all service calls are synchronous, so it is possible for an advanced service
  request to arrive before the edge deployment has completed.
  One solution is to check the router status before deploying an advanced
  service. If the router is not in ACTIVE status, the service deployment
  request would return a "Router not ready" error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1298865/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298818] [NEW] DATETIME expression in api_samples_test_base.py format is not sufficient

2014-03-28 Thread Ghanshyam
Public bug reported:

In Nova API sample file testing, the file
https://github.com/openstack/nova/blob/master/nova/tests/integrated/api_samples_test_base.py#L272
defines the DATETIME expression in the 'timestamp' variable as follows:
 '\d{4}-[0,1]\d-[0-3]\d[ ,T]'
 '\d{2}:\d{2}:\d{2}'
 '(Z|(\+|-)\d{2}:\d{2}|\.\d{6}|)',
This is sufficient for the existing API sample tests, but it needs to be
extended for DATETIME values in the format below:
 "2014-03-28 07:05:11.726547+00:00"
where the time zone offset appears together with microseconds
(.726547+00:00). This combination is not supported by the existing
expression.

Currently the above DATETIME format appears in the Nova V2 GET
/keypairs/ API XML response. Its API sample file and test have not been
written yet, which is why there are no failures in the test results.

We need to extend the DATETIME expression to accommodate the format
returned by the above API.
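
One possible extension (illustrative; the exact grouping in the real template
may differ) that also accepts microseconds followed by a numeric offset:

# Illustrative extension of the timestamp pattern so that microseconds can be
# combined with a numeric UTC offset, e.g. "2014-03-28 07:05:11.726547+00:00".
import re

timestamp = re.compile(
    r'\d{4}-[0,1]\d-[0-3]\d[ ,T]'
    r'\d{2}:\d{2}:\d{2}'
    r'(Z|(\.\d{6})?(\+|-)\d{2}:\d{2}|\.\d{6}|)')

assert timestamp.match('2014-03-28 07:05:11.726547+00:00')
assert timestamp.match('2013-12-05T10:54:02Z')
assert timestamp.match('2013-12-05T10:54:02')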

** Affects: nova
 Importance: Undecided
 Assignee: Ghanshyam (ghanshyammann)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Ghanshyam (ghanshyammann)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298818

Title:
  DATETIME expression in api_samples_test_base.py format is not
  sufficient

Status in OpenStack Compute (Nova):
  New

Bug description:
  In Nova API sample file testing, the file
https://github.com/openstack/nova/blob/master/nova/tests/integrated/api_samples_test_base.py#L272
  defines the DATETIME expression in the 'timestamp' variable as follows:
   '\d{4}-[0,1]\d-[0-3]\d[ ,T]'
   '\d{2}:\d{2}:\d{2}'
   '(Z|(\+|-)\d{2}:\d{2}|\.\d{6}|)',
  This is sufficient for the existing API sample tests, but it needs to be
  extended for DATETIME values in the format below:
   "2014-03-28 07:05:11.726547+00:00"
  where the time zone offset appears together with microseconds
  (.726547+00:00). This combination is not supported by the existing
  expression.

  Currently the above DATETIME format appears in the Nova V2 GET
  /keypairs/ API XML response. Its API sample file and test have not been
  written yet, which is why there are no failures in the test results.

  We need to extend the DATETIME expression to accommodate the format
  returned by the above API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1298818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp