[Yahoo-eng-team] [Bug 1511539] [NEW] libvirt evacuate on ppcle failed with IDE controllers are unsupported for this QEMU binary or machine type

2015-10-29 Thread Christine Wang
Public bug reported:

This is on a Liberty release.
During evacuate, the image_meta is empty, so we get the architecture
information from the host.

However, get_disk_bus_for_device_type in nova/virt/libvirt/blockinfo.py
has no default bus type for ppcle or ppc64le, so it ended up using IDE
for the cdrom or disk.

As a result, the evacuation fails with:

2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f] rv = execute(f, *args, **kwargs)
2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f] six.reraise(c, e, tb)
2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f] rv = meth(*args, **kwargs)
2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f]   File 
"/usr/lib64/python2.7/site-packages/libvirt.py", line 996, in createWithFlags
2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f] if ret == -1: raise libvirtError 
('virDomainCreateWithFlags() failed', dom=self)
2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f] libvirtError: unsupported configuration: 
IDE controllers are unsupported for this QEMU binary or machine type
2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f]


The line

    if guestarch in (arch.PPC, arch.PPC64, arch.S390, arch.S390X):

needs to be updated to

    if guestarch in (arch.PPC, arch.PPC64, arch.PPCLE, arch.PPC64LE,
                     arch.S390, arch.S390X):


 nova/virt/libvirt/blockinfo.py get_disk_bus_for_device_type
    ...
    elif virt_type in ("qemu", "kvm"):
        if device_type == "cdrom":
            guestarch = libvirt_utils.get_arch(image_meta)
            if guestarch in (arch.PPC, arch.PPC64, arch.S390, arch.S390X):
                return "scsi"
            else:
                return "ide"
        elif device_type == "disk":
            return "virtio"
        elif device_type == "floppy":
            return "fdc"
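The proposed change can be checked in isolation. The sketch below stubs the arch constants as plain strings (mirroring nova's values) instead of importing nova, so all names here are illustrative:

```python
# Standalone sketch of the proposed conditional; the constants stand in
# for nova's arch values and are stubbed as plain strings here.
PPC, PPC64, PPCLE, PPC64LE, S390, S390X = (
    "ppc", "ppc64", "ppcle", "ppc64le", "s390", "s390x")


def cdrom_bus_for_arch(guestarch):
    """Pick the cdrom bus, treating little-endian PPC like big-endian."""
    if guestarch in (PPC, PPC64, PPCLE, PPC64LE, S390, S390X):
        return "scsi"
    return "ide"
```

With the two little-endian constants added, a ppc64le guest gets a scsi cdrom bus instead of the IDE bus that its QEMU machine type rejects.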

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1511539

Title:
  libvirt evacuate on ppcle failed with IDE controllers are unsupported
  for this QEMU binary or machine type

Status in OpenStack Compute (nova):
  New

Bug description:
  This is on a Liberty release.
  During evacuate, the image_meta is empty, so we get the architecture
  information from the host.

  However, get_disk_bus_for_device_type in nova/virt/libvirt/blockinfo.py
  has no default bus type for ppcle or ppc64le, so it ended up using IDE
  for the cdrom or disk.

  As a result, the evacuation fails with:

  2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f] rv = execute(f, *args, **kwargs)
  2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
  2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f] six.reraise(c, e, tb)
  2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
  2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f] rv = meth(*args, **kwargs)
  2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f]   File 
"/usr/lib64/python2.7/site-packages/libvirt.py", line 996, in createWithFlags
  2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f] if ret == -1: raise libvirtError 
('virDomainCreateWithFlags() failed', dom=self)
  2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f] libvirtError: unsupported configuration: 
IDE controllers are unsupported for this QEMU binary or machine type
  2015-10-26 22:23:51.413 103536 ERROR nova.compute.manager [instance: 
3c8f8d24-ebcf-425a-b50d-4ddc08e7b92f]

  
  The line if guestarch in (arch.PPC, arch.PPC64, arch.S390, arch.S390X):
  needs to be updated to
  if guestarch in (arch.PPC, 

[Yahoo-eng-team] [Bug 1502961] [NEW] libvirt spawn with configdrive failed with UnboundLocalError

2015-10-05 Thread Christine Wang
Public bug reported:

If we create a libvirt server that requires a config drive, it fails
with the following error:

2015-10-05 09:02:50.714 30941 ERROR nova.compute.manager [instance: 
ada89410-3ce1-44bf-b1e5-f3a0a9dfd846]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2437, in 
spawn
2015-10-05 09:02:50.714 30941 ERROR nova.compute.manager [instance: 
ada89410-3ce1-44bf-b1e5-f3a0a9dfd846] admin_pass=admin_password)
2015-10-05 09:02:50.714 30941 ERROR nova.compute.manager [instance: 
ada89410-3ce1-44bf-b1e5-f3a0a9dfd846]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2958, in 
_create_image
2015-10-05 09:02:50.714 30941 ERROR nova.compute.manager [instance: 
ada89410-3ce1-44bf-b1e5-f3a0a9dfd846] 
config_drive_image.import_file_cleanup(configdrive_path)
2015-10-05 09:02:50.714 30941 ERROR nova.compute.manager [instance: 
ada89410-3ce1-44bf-b1e5-f3a0a9dfd846] UnboundLocalError: local variable 
'config_drive_image' referenced before assignment

This problem is seen with liberty 20150920
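The failure is the classic pattern where a name is bound only on some branches of a function but referenced unconditionally later. A minimal reproduction, with the function name and branch condition purely illustrative of the pattern (not nova's actual code):

```python
# Minimal reproduction of the UnboundLocalError pattern: the name is
# bound only on one branch, but the later reference runs regardless.
def create_image_sketch(needs_config_drive):
    if not needs_config_drive:
        config_drive_image = None
    # On the config-drive path the name was never bound, so this
    # reference raises UnboundLocalError.
    return config_drive_image
```

The usual fix is to bind the name (e.g. to None) before the branches and guard the cleanup call on it being set.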

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1502961

Title:
  libvirt spawn with configdrive failed with UnboundLocalError

Status in OpenStack Compute (nova):
  New

Bug description:
  If we create a libvirt server that requires a config drive, it fails
  with the following error:

  2015-10-05 09:02:50.714 30941 ERROR nova.compute.manager [instance: 
ada89410-3ce1-44bf-b1e5-f3a0a9dfd846]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2437, in 
spawn
  2015-10-05 09:02:50.714 30941 ERROR nova.compute.manager [instance: 
ada89410-3ce1-44bf-b1e5-f3a0a9dfd846] admin_pass=admin_password)
  2015-10-05 09:02:50.714 30941 ERROR nova.compute.manager [instance: 
ada89410-3ce1-44bf-b1e5-f3a0a9dfd846]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2958, in 
_create_image
  2015-10-05 09:02:50.714 30941 ERROR nova.compute.manager [instance: 
ada89410-3ce1-44bf-b1e5-f3a0a9dfd846] 
config_drive_image.import_file_cleanup(configdrive_path)
  2015-10-05 09:02:50.714 30941 ERROR nova.compute.manager [instance: 
ada89410-3ce1-44bf-b1e5-f3a0a9dfd846] UnboundLocalError: local variable 
'config_drive_image' referenced before assignment

  This problem is seen with liberty 20150920

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1502961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501831] [NEW] Evacuate libvirt instance failed with error 'Cannot load 'disk_format' in the base class'

2015-10-01 Thread Christine Wang
Public bug reported:

openstack-nova-12.0.0-201509202117

When evacuating a libvirt instance, it fails with the following error:
NotImplementedError: Cannot load 'disk_format' in the base class

2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2431, in 
spawn
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher 
block_device_info)
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/blockinfo.py", line 630, in 
get_disk_info
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher rescue)
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/blockinfo.py", line 537, in 
get_disk_mapping
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher disk_bus, 
cdrom_bus, root_device_name)
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/blockinfo.py", line 432, in 
get_root_info
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher if 
image_meta.disk_format == 'iso':
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 66, in 
getter
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher 
self.obj_load_attr(name)
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 555, in 
obj_load_attr
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher _("Cannot 
load '%s' in the base class") % attrname)
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher 
NotImplementedError: Cannot load 'disk_format' in the base class
2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher

When a libvirt instance is evacuated, the image_meta is passed in as {},
so disk_format is never populated on the ImageMeta object.

It's unclear to me what the right way to fix this issue is: should we
change ImageMeta's from_dict to make sure 'disk_format' is always
populated, or should we add an obj_load_attr method to ImageMeta?
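As a standalone illustration of the second option, here is a sketch that mimics (but does not use) the oslo.versionedobjects lazy-load hook; the class name and field defaults are illustrative, not nova's actual code:

```python
# Sketch: an obj_load_attr that supplies a default for unset fields
# instead of raising NotImplementedError, as the base class does.
class ImageMetaSketch(object):
    def __init__(self, fields):
        self._fields = dict(fields)

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails; lazy-load
        # the field, then return it.
        if name not in self._fields:
            self.obj_load_attr(name)
        return self._fields[name]

    def obj_load_attr(self, name):
        # Default unset attributes to None rather than raising.
        self._fields[name] = None
```

With this shape, image_meta.disk_format on an empty image_meta yields None, which get_root_info's 'iso' comparison handles cleanly.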

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501831

Title:
  Evacuate libvirt instance failed with error 'Cannot load 'disk_format'
  in the base class'

Status in OpenStack Compute (nova):
  New

Bug description:
  openstack-nova-12.0.0-201509202117

  When evacuating a libvirt instance, it fails with the following error:
  NotImplementedError: Cannot load 'disk_format' in the base class

  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2431, in 
spawn
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher 
block_device_info)
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/blockinfo.py", line 630, in 
get_disk_info
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher rescue)
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/blockinfo.py", line 537, in 
get_disk_mapping
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher 
disk_bus, cdrom_bus, root_device_name)
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/blockinfo.py", line 432, in 
get_root_info
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher if 
image_meta.disk_format == 'iso':
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 66, in 
getter
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher 
self.obj_load_attr(name)
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 555, in 
obj_load_attr
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher 
_("Cannot load '%s' in the base class") % attrname)
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher 
NotImplementedError: Cannot load 'disk_format' in the base class
  2015-09-30 08:04:47.484 19026 ERROR oslo_messaging.rpc.dispatcher

  When a libvirt instance is evacuated, the image_meta is passed in as
  {}, so disk_format is never populated on the ImageMeta object.

  It's unclear to me what the right way to fix this issue is: should we
  change ImageMeta's from_dict to make sure 

[Yahoo-eng-team] [Bug 1431652] Re: os-volume_attachments return 500 error code instead of 404 if invalid volume is specified

2015-04-26 Thread Christine Wang
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431652

Title:
  os-volume_attachments return 500 error code instead of 404 if invalid
  volume is specified

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  If I do a DELETE of os-volume_attachments with an invalid volume, a
  500 error code is returned instead of 404.

  The problem is at volume = self.volume_api.get(context, volume_id),
  where the NotFound exception is not handled. This problem is fixed
  in the v3 API.
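  A sketch of that handling, with stub exception classes standing in for cinderclient's NotFound and webob's HTTPNotFound (names illustrative, not the actual nova code):

```python
# Sketch of mapping a missing volume to a 404; the exception classes
# are stand-ins for cinderclient NotFound and webob HTTPNotFound.
class NotFound(Exception):
    pass


class HTTPNotFound(Exception):
    pass


def get_volume_or_404(volume_get, context, volume_id):
    try:
        return volume_get(context, volume_id)
    except NotFound:
        raise HTTPNotFound("Volume %s could not be found." % volume_id)
```

  Catching the client's NotFound at the API layer is what turns the unhandled exception (a 500) into the expected 404 response.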

  2015-03-12 08:49:19.146 20273 INFO nova.osapi_compute.wsgi.server 
[req-001f6e6e-4726-4738-a3e7-74c5c7eaaac5 None] 9.114.193.249,127.0.0.1 DELETE 
/v2/dd069270f6634cafaf66777c4a2ee137/servers/e44ee780-0b57-4bcb-89ef-ab99e4d7d1a0/os-volume_attachments/volume-815308985
 HTTP/1.1 status: 500 len: 295 time: 0.6408780
  ...
  2015-03-12 08:49:18.969 20273 ERROR nova.api.openstack 
[req-001f6e6e-4726-4738-a3e7-74c5c7eaaac5 None] Caught error: Not Found (HTTP 
404) (Request-ID: req-8d133de9-430e-41ad-819a-3f9685deed29)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack Traceback (most recent 
call last):
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/nova/api/openstack/__init__.py, line 125, in 
__call__
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack return 
req.get_response(self.application)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/webob/request.py, line 1296, in send
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/webob/request.py, line 1260, in 
call_application
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/webob/dec.py, line 144, in __call__
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py, line 748, 
in __call__
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py, line 684, 
in _call_app
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/webob/dec.py, line 144, in __call__
  ...
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/nova/api/openstack/compute/contrib/volumes.py,
 line 398, in delete
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack volume = 
self.volume_api.get(context, volume_id)
  ...
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack item = 
cinder.cinderclient(context).volumes.get(volume_id)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py, line 227, in get
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack return 
self._get(/volumes/%s % volume_id, volume)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/cinderclient/base.py, line 149, in _get
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack resp, body = 
self.api.client.get(url)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/cinderclient/client.py, line 88, in get
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack return 
self._cs_request(url, 'GET', **kwargs)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/cinderclient/client.py, line 85, in 
_cs_request
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack return 
self.request(url, method, **kwargs)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/cinderclient/client.py, line 80, in request
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack return 
super(SessionClient, self).request(*args, **kwargs)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/keystoneclient/adapter.py, line 166, in 
request
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack resp = 
super(LegacyJsonAdapter, self).request(*args, **kwargs)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/keystoneclient/adapter.py, line 

[Yahoo-eng-team] [Bug 1445674] [NEW] Fix kwargs['migration'] KeyError in @errors_out_migration

2015-04-17 Thread Christine Wang
Public bug reported:

This is similar to bug #1423952.
We need to handle the fact that 'migration' can arrive in either args or
kwargs; it should follow the same fix as bug 1423952.

This fixes the KeyError in the decorator by normalizing args and kwargs
into a single dict from which we can pull the migration.

2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/exception.py, line 88, in wrapped
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher payload)
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_utils/excutils.py, line 85, in __exit__
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/exception.py, line 71, in wrapped
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 328, in 
decorated_function
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance_uuid=instance_uuid)
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_utils/excutils.py, line 85, in __exit__
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 299, in 
decorated_function
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 378, in 
decorated_function
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 276, in 
decorated_function
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher migration 
= kwargs['migration']
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher KeyError: 
'migration'
2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher

** Affects: nova
 Importance: Undecided
 Assignee: Christine Wang (ijuwang)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Christine Wang (ijuwang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445674

Title:
  Fix kwargs['migration'] KeyError in @errors_out_migration

Status in OpenStack Compute (Nova):
  New

Bug description:
  This is similar to bug #1423952.
  We need to handle the fact that 'migration' can arrive in either args
  or kwargs; it should follow the same fix as bug 1423952.

  This fixes the KeyError in the decorator by normalizing args and
  kwargs into a single dict from which we can pull the migration.

  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/exception.py, line 88, in wrapped
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher payload)
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_utils/excutils.py, line 85, in __exit__
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/exception.py, line 71, in wrapped
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 328, in 
decorated_function
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance_uuid=instance_uuid)
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_utils/excutils.py, line 85, in __exit__
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 299, in 
decorated_function
  2015-04-17 04:53:26.418 23056 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-04-17 04:53

[Yahoo-eng-team] [Bug 1445698] [NEW] _update_usage_from_migrations does not handle InstanceNotFound and causes compute service restart to fail

2015-04-17 Thread Christine Wang
Public bug reported:

Due to bug #1445674, Migration object was not set to error if there is a
resize failure.

Later on, if we delete the instance, the Migration object will continue
to exist.

If we were to restart compute service, it will fail to start since
_update_usage_from_migrations

Version:
commit 095e9398ecf69ffdaeb929287d5f5f9a38257361
Merge: 6029860 0e28a5f
Author: Jenkins jenk...@review.openstack.org
Date:   Fri Apr 17 19:47:18 2015 +

Merge fixed tests in test_iptables_network to work with random
PYTHONHASHSEED

commit 6029860ffa0f2500505d1894f5bbb9ca717a8232
Merge: 760fba5 5bfe303
Author: Jenkins jenk...@review.openstack.org
Date:   Fri Apr 17 19:46:50 2015 +

Merge refactored tests in test_objects to pass with random
PYTHONHASHSEED

commit 760fba535b2eb17243a39af9fea70e8dbcdbe713
Merge: 1248353 78883fa
[root@ip9-114-195-109 nova]# git log -1
commit 20cb0745550fc6bbd9e789caa7fdbf9669b2d24d
Merge: 095e939 56f355e
Author: Jenkins jenk...@review.openstack.org
Date:   Fri Apr 17 1


2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py, line 445, 
in _update_available_resource
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
self._update_usage_from_migrations(context, resources, migrations)
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py, line 708, 
in _update_usage_from_migrations
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
instance = migration.instance
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/objects/migration.py, line 80, in 
instance
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
return objects.Instance.get_by_uuid(self._context, self.instance_uuid)
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/objects/base.py, line 163, in wrapper
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
result = fn(cls, context, *args, **kwargs)
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/objects/instance.py, line 564, in 
get_by_uuid
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
use_slave=use_slave)
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/db/api.py, line 651, in 
instance_get_by_uuid
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
columns_to_join, use_slave=use_slave)
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 233, in 
wrapper
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
return f(*args, **kwargs)
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 1744, in 
instance_get_by_uuid
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
columns_to_join=columns_to_join, use_slave=use_slave)
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 1756, in 
_instance_get_by_uuid
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup raise 
exception.InstanceNotFound(instance_id=uuid)
2015-04-17 05:56:58.402 22873 TRACE nova.openstack.common.threadgroup 
InstanceNotFound: Instance b8ddd534-f114-4ea6-9833-eeb64a8bfc49 could not be 
found.


def _update_usage_from_migrations(self, context, resources, migrations):

    self.tracked_migrations.clear()

    filtered = {}

    # do some defensive filtering against bad migrations records in the
    # database:
    for migration in migrations:
        instance = migration.instance   # <--- raises InstanceNotFound

        if not instance:
            # migration referencing deleted instance
            continue

        uuid = instance.uuid
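The defensive handling being asked for can be sketched in isolation: treat a migration whose instance lookup raises InstanceNotFound the same way the loop already treats a migration referencing a deleted instance. The callable below stands in for the migration.instance lookup, and all names are illustrative:

```python
# Sketch: skip migrations whose instance lookup raises, instead of
# letting the exception abort compute-service startup.
class InstanceNotFound(Exception):
    pass


def filter_live_instances(migrations):
    filtered = []
    for migration in migrations:
        try:
            instance = migration()   # stand-in for migration.instance
        except InstanceNotFound:
            continue                 # treat like a deleted instance
        if instance is not None:
            filtered.append(instance)
    return filtered
```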

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445698

Title:
  _update_usage_from_migrations does not handle InstanceNotFound and
  causes compute service restart to fail

Status in OpenStack Compute (Nova):
  New

Bug description:
  Due to bug #1445674, the Migration object is not set to error when a
  resize fails.

  Later on, if we delete the instance, the Migration object continues
  to exist.

  If we then restart the compute service, it fails to start since
  

[Yahoo-eng-team] [Bug 1431652] [NEW] os-volume_attachments return 500 error code instead of 404 if invalid volume is specified

2015-03-12 Thread Christine Wang
/keystoneclient/session.py, line 363, in 
request
2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack raise 
exceptions.from_response(resp, method, url)
2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack NotFound: Not Found 
(HTTP 404) (Request-ID: req-8d133de9-430e-41ad-819a-3f9685deed29)
2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack
2015-03-12 08:49:19.145 20273 INFO nova.api.openstack 
[req-001f6e6e-4726-4738-a3e7-74c5c7eaaac5 None] 
http://localhost:8774/v2/dd069270f6634cafaf66777c4a2ee137/servers/e44ee780-0b57-4bcb-89ef-ab99e4d7d1a0/os-volume_attachments/volume-815308985
 returned with HTTP 500

** Affects: nova
 Importance: Undecided
 Assignee: Christine Wang (ijuwang)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Christine Wang (ijuwang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431652

Title:
  os-volume_attachments return 500 error code instead of 404 if invalid
  volume is specified

Status in OpenStack Compute (Nova):
  New

Bug description:
  If I do a DELETE of os-volume_attachments with an invalid volume, a
  500 error code is returned instead of 404.

  The problem is at volume = self.volume_api.get(context, volume_id),
  where the NotFound exception is not handled. This problem is fixed
  in the v3 API.

  2015-03-12 08:49:19.146 20273 INFO nova.osapi_compute.wsgi.server 
[req-001f6e6e-4726-4738-a3e7-74c5c7eaaac5 None] 9.114.193.249,127.0.0.1 DELETE 
/v2/dd069270f6634cafaf66777c4a2ee137/servers/e44ee780-0b57-4bcb-89ef-ab99e4d7d1a0/os-volume_attachments/volume-815308985
 HTTP/1.1 status: 500 len: 295 time: 0.6408780
  ...
  2015-03-12 08:49:18.969 20273 ERROR nova.api.openstack 
[req-001f6e6e-4726-4738-a3e7-74c5c7eaaac5 None] Caught error: Not Found (HTTP 404) (Request-ID: req-8d133de9-430e-41ad-819a-3f9685deed29)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack Traceback (most recent call last):
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/nova/api/openstack/__init__.py", line 125, in __call__
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack     return req.get_response(self.application)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/request.py", line 1296, in send
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack     application, catch_exc_info=False)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/request.py", line 1260, in call_application
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack     return resp(environ, start_response)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 748, in __call__
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack     return self._call_app(env, start_response)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 684, in _call_app
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack     return self._app(env, _fake_start_response)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
  ...
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/contrib/volumes.py", line 398, in delete
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack     volume = self.volume_api.get(context, volume_id)
  ...
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack     item = cinder.cinderclient(context).volumes.get(volume_id)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 227, in get
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack     return self._get("/volumes/%s" % volume_id, "volume")
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/cinderclient/base.py", line 149, in _get
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack     resp, body = self.api.client.get(url)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 88, in get
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack     return self._cs_request(url, 'GET', **kwargs)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 85, in _cs_request
  2015-03-12 08:49:18.969 20273 TRACE

[Yahoo-eng-team] [Bug 1404268] [NEW] Missing nova context during spawn

2014-12-19 Thread Christine Wang
Public bug reported:

The nova request context tracks a security context and other request
information, including a request id that is added to log entries
associated with this request.  The request context is passed around
explicitly in many chunks of OpenStack code.  But nova/context.py also
stores the RequestContext in the thread's local store (when the
RequestContext is created, or when it is explicitly stored through a
call to update_store).  The nova logger will use an explicitly passed
context, or look for it in the local.store.

A recent change in community openstack code has resulted in the context
not being set for many nova log messages during spawn:

https://bugs.launchpad.net/neutron/+bug/1372049

This change spawns a new thread in nova/compute/manager.py
build_and_run_instance, and the spawn runs in that new thread.  When the
original RPC thread created the nova RequestContext, the context was set
in the thread's local store.  But the context does not get set in the
newly-spawned thread.
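
The effect can be illustrated with a minimal stand-in for nova's local store (plain threading.local and threading.Thread here stand in for nova's local-store module and the eventlet spawn; the names update_store and current_context are simplified, not the real nova surface):

```python
import threading

# Simplified model: a context stored in one thread's local store is
# invisible to a newly-spawned thread.
_local = threading.local()

def update_store(context):
    # Mirrors what nova does when a RequestContext is created or
    # explicitly stored.
    _local.context = context

def current_context():
    # Mirrors the logger's fallback lookup in the thread-local store.
    return getattr(_local, "context", None)

def spawned_work(result):
    # Runs in a new thread: the parent's thread-local context is gone.
    result.append(current_context())

update_store("req-bd959d69")          # set in the original RPC thread
result = []
t = threading.Thread(target=spawned_work, args=(result,))
t.start()
t.join()

print(current_context())  # req-bd959d69 in the original thread
print(result[0])          # None in the spawned thread
```

This is exactly why the log lines below show `[-]` instead of a request id: the logger running in the spawned thread finds no context in its own local store.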

Example of log messages with missing req id during spawn:

2014-12-13 22:20:30.987 18219 DEBUG nova.openstack.common.lockutils [-] Acquired 
semaphore 87c7fc32-042e-40b7-af46-44bff50fa1b4 lock 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:229
2014-12-13 22:20:30.987 18219 DEBUG nova.openstack.common.lockutils [-] Got 
semaphore / lock _locked_do_build_and_run_instance inner 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:271
2014-12-13 22:20:31.012 18219 AUDIT nova.compute.manager 
[req-bd959d69-86de-4eea-ae1d-a066843ca317 None] [instance: 
87c7fc32-042e-40b7-af46-44bff50fa1b4] Starting instance...
...
2014-12-13 22:20:31.280 18219 DEBUG nova.openstack.common.lockutils [-] Created 
new semaphore compute_resources internal_lock 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:206
2014-12-13 22:20:31.281 18219 DEBUG nova.openstack.common.lockutils [-] 
Acquired semaphore compute_resources lock 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:229
2014-12-13 22:20:31.282 18219 DEBUG nova.openstack.common.lockutils [-] Got 
semaphore / lock instance_claim inner 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:271
2014-12-13 22:20:31.284 18219 DEBUG nova.compute.resource_tracker [-] Memory 
overhead for 512 MB instance; 0 MB instance_claim 
/usr/lib/python2.6/site-packages/nova/compute/resource_tracker.py:127
2014-12-13 22:20:31.290 18219 AUDIT nova.compute.claims [-] [instance: 
87c7fc32-042e-40b7-af46-44bff50fa1b4] Attempting claim: memory 512 MB, disk 10 GB
2014-12-13 22:20:31.292 18219 AUDIT nova.compute.claims [-] [instance: 
87c7fc32-042e-40b7-af46-44bff50fa1b4] Total memory: 131072 MB, used: 12288.00 MB
2014-12-13 22:20:31.296 18219 AUDIT nova.compute.claims [-] [instance: 
87c7fc32-042e-40b7-af46-44bff50fa1b4] memory limit not specified, defaulting to 
unlimited
2014-12-13 22:20:31.300 18219 AUDIT nova.compute.claims [-] [instance: 
87c7fc32-042e-40b7-af46-44bff50fa1b4] Total disk: 2097152 GB, used: 60.00 GB
2014-12-13 22:20:31.304 18219 AUDIT nova.compute.claims [-] [instance: 
87c7fc32-042e-40b7-af46-44bff50fa1b4] disk limit not specified, defaulting to 
unlimited
...

2014-12-13 22:20:32.850 18219 DEBUG nova.network.neutronv2.api [-]
[instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] get_instance_nw_info()
_get_instance_nw_info /usr/lib/python2.6/site-
packages/nova/network/neutronv2/api.py:611

Proposed patch:

one new line of code at the beginning of nova/compute/manager.py
_do_build_and_run_instance:

context.update_store()
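
A minimal sketch of how that one line would help, using the same kind of simplified thread-local model (threading stands in for the eventlet spawn; everything here except update_store() is illustrative, not the real nova code):

```python
import threading

_local = threading.local()

class RequestContext(object):
    """Simplified stand-in for nova.context.RequestContext."""
    def __init__(self, request_id):
        self.request_id = request_id
        self.update_store()           # nova sets the store on creation

    def update_store(self):
        # Re-bind this context into the *current* thread's local store.
        _local.context = self

def _do_build_and_run_instance(context, result):
    context.update_store()            # the one new line proposed above
    # The logger would now find the context in this thread's local store.
    result.append(_local.context.request_id)

ctx = RequestContext("req-bd959d69")  # created in the RPC thread
result = []
t = threading.Thread(target=_do_build_and_run_instance, args=(ctx, result))
t.start()
t.join()
print(result[0])  # req-bd959d69
```

Because the spawned thread re-stores the explicitly passed context before doing any work, subsequent log messages in that thread pick up the request id again.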

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404268

Title:
  Missing nova context during spawn

Status in OpenStack Compute (Nova):
  New

Bug description:
  The nova request context tracks a security context and other request
  information, including a request id that is added to log entries
  associated with this request.  The request context is passed around
  explicitly in many chunks of OpenStack code.  But nova/context.py also
  stores the RequestContext in the thread's local store (when the
  RequestContext is created, or when it is explicitly stored through a
  call to update_store).  The nova logger will use an explicitly passed
  context, or look for it in the local.store.

  A recent change in community openstack code has resulted in the
  context not being set for many nova log messages during spawn:

  https://bugs.launchpad.net/neutron/+bug/1372049

  This change spawns a new thread in nova/compute/manager.py
  build_and_run_instance, and the spawn runs in that new thread.  When
  the original RPC thread created the nova RequestContext, the context
  was set in the thread's local store.  But the context does not get set
  in the newly-spawned thread.

  Example of log 

[Yahoo-eng-team] [Bug 1389899] [NEW] nova delete shouldn't remove instance from DB if host is not up

2014-11-05 Thread Christine Wang
Public bug reported:

In nova/compute/api.py, nova deletes the instance from the DB even when the
compute node is not up. I think force-delete should be used to handle the
compute-node-down scenario: if the compute node is not up, only a
force-delete should be able to delete the instance.

Code flow:
delete -> _delete_instance -> _delete

_delete(...) code snippet:
..
    if not is_up:
        # If compute node isn't up, just delete from DB
        self._local_delete(context, instance, bdms, delete_type, cb)
        quotas.commit()

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1389899

Title:
  nova delete shouldn't remove instance from DB if host is not up

Status in OpenStack Compute (Nova):
  New

Bug description:
  In nova/compute/api.py, nova deletes the instance from the DB even when
  the compute node is not up. I think force-delete should be used to
  handle the compute-node-down scenario: if the compute node is not up,
  only a force-delete should be able to delete the instance.

  Code flow:
  delete -> _delete_instance -> _delete

  _delete(...) code snippet:
  ..
      if not is_up:
          # If compute node isn't up, just delete from DB
          self._local_delete(context, instance, bdms, delete_type, cb)
          quotas.commit()

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1389899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp