[Yahoo-eng-team] [Bug 1680254] Re: VMware: Failed at boot instance from volume

2017-08-25 Thread Vipin Balachandran
*** This bug is a duplicate of bug 1712281 ***
https://bugs.launchpad.net/bugs/1712281

** This bug has been marked a duplicate of bug 1712281
   Attach volume failed for vmware driver

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1680254

Title:
  VMware: Failed at boot instance from volume

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Booting an instance from a volume fails with:

   [req-e1b1ac9c-3394-44c6-8453-b05f41f2aa16 admin admin]  Instance failed to 
spawn
    Traceback (most recent call last):
  File "/opt/stack/nova/nova/compute/manager.py", line 2120, in 
_build_resources
    yield resources
  File "/opt/stack/nova/nova/compute/manager.py", line 1925, in 
_build_and_run_instance
    block_device_info=block_device_info)
  File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 324, in spawn
    admin_password, network_info, block_device_info)
  File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 791, in spawn
    instance, vi.datastore.ref, adapter_type)
  File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 606, in 
attach_root_volume
    self.attach_volume(connection_info, instance, adapter_type)
  File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 383, in 
attach_volume
    self._attach_volume_vmdk(connection_info, instance, adapter_type)
  File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 337, in 
_attach_volume_vmdk
    if state.lower() != 'poweredoff':
    AttributeError: 'int' object has no attribute 'lower'

  In nova/compute/power_state.py, all states are defined as integer
  constants, so state is an int, not a str. The driver code needs to be
  updated accordingly.
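Since nova.compute.power_state defines states as integers, the string comparison in the driver raises AttributeError; a minimal sketch of the corrected check (the constant values mirror Nova's definitions, but the attach guard itself is a simplified stand-in, not the actual driver code):

```python
# Sketch of the fix implied above: compare the instance's power state
# against the integer constants from nova/compute/power_state.py instead
# of calling .lower() on it. Constant values mirror Nova's definitions;
# the volume-attach guard below is a simplified, hypothetical stand-in.

# Mirrors nova/compute/power_state.py (integer-indexed states).
NOSTATE = 0x00
RUNNING = 0x01
PAUSED = 0x03
SHUTDOWN = 0x04

def can_attach_ide_volume(state):
    """IDE volumes can only be attached while the VM is powered off."""
    # Broken form: state.lower() != 'poweredoff'
    # -> AttributeError: 'int' object has no attribute 'lower'
    return state == SHUTDOWN

assert can_attach_ide_volume(SHUTDOWN)
assert not can_attach_ide_volume(RUNNING)
```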

  Command to reproduce in devstack (all repos at master):
  nova boot --nic net-id= --flavor  --block-device 
source=image,id=,dest=volume,size=2,shutdown=preserve,bootindex=0 
mystanceFromVolume1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1680254/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1704010] Re: VMware: attach volume fails with AttributeError

2017-08-25 Thread Vipin Balachandran
*** This bug is a duplicate of bug 1712281 ***
https://bugs.launchpad.net/bugs/1712281

** This bug is no longer a duplicate of bug 1680254
   VMware: Failed at boot instance from volume
** This bug has been marked a duplicate of bug 1712281
   Attach volume failed for vmware driver

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1704010

Title:
  VMware: attach volume fails with AttributeError

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Attaching/detaching volume with adapter type IDE fails with:
  AttributeError: 'int' object has no attribute 'lower'

  2017-07-11 23:20:11.876 ERROR nova.virt.block_device 
[req-15c66739-f62f-405d-ad71-e9e46dfeea88 demo admin] [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] Driver failed to attach volume 
7f94ea59-510c-4c9e-bf5b-9accc59a7a54 at /dev/sdc: AttributeError: 'int' object 
has no attribute 'lower'
  2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] Traceback (most recent call last):
  2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 389, in attach
  2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] device_type=self['device_type'], 
encryption=encryption)
  2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 328, in attach_volume
  2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] return 
self._volumeops.attach_volume(connection_info, instance)
  2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 381, in attach_volume
  2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] 
self._attach_volume_vmdk(connection_info, instance, adapter_type)
  2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 335, in 
_attach_volume_vmdk
  2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] if state.lower() != 'poweredoff':
  2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] AttributeError: 'int' object has no 
attribute 'lower'
  2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] 

  2017-07-11 23:20:56.655 ERROR nova.virt.block_device 
[req-d985d896-119c-40e4-868e-677dbf461df1 demo admin] [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] Failed to detach volume 
af84764d-dd81-4108-8d2f-b39cedeb9aa2 from /dev/sdb: AttributeError: 'int' 
object has no attribute 'lower'
  2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] Traceback (most recent call last):
  2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 277, in driver_detach
  2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] encryption=encryption)
  2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 333, in detach_volume
  2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] return 
self._volumeops.detach_volume(connection_info, instance)
  2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 582, in detach_volume
  2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] 
self._detach_volume_vmdk(connection_info, instance)
  2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 535, in 
_detach_volume_vmdk
  2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] if state.lower() != 'poweredoff':
  2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] AttributeError: 'int' object has no 
attribute 'lower'
  2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1704010] [NEW] VMware: attach volume fails with AttributeError

2017-07-12 Thread Vipin Balachandran
Public bug reported:

Attaching/detaching volume with adapter type IDE fails with:
AttributeError: 'int' object has no attribute 'lower'

2017-07-11 23:20:11.876 ERROR nova.virt.block_device 
[req-15c66739-f62f-405d-ad71-e9e46dfeea88 demo admin] [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] Driver failed to attach volume 
7f94ea59-510c-4c9e-bf5b-9accc59a7a54 at /dev/sdc: AttributeError: 'int' object 
has no attribute 'lower'
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] Traceback (most recent call last):
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 389, in attach
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] device_type=self['device_type'], 
encryption=encryption)
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 328, in attach_volume
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] return 
self._volumeops.attach_volume(connection_info, instance)
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 381, in attach_volume
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] 
self._attach_volume_vmdk(connection_info, instance, adapter_type)
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 335, in 
_attach_volume_vmdk
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] if state.lower() != 'poweredoff':
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] AttributeError: 'int' object has no 
attribute 'lower'
2017-07-11 23:20:11.876 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] 

2017-07-11 23:20:56.655 ERROR nova.virt.block_device 
[req-d985d896-119c-40e4-868e-677dbf461df1 demo admin] [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] Failed to detach volume 
af84764d-dd81-4108-8d2f-b39cedeb9aa2 from /dev/sdb: AttributeError: 'int' 
object has no attribute 'lower'
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] Traceback (most recent call last):
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 277, in driver_detach
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] encryption=encryption)
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 333, in detach_volume
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] return 
self._volumeops.detach_volume(connection_info, instance)
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 582, in detach_volume
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] 
self._detach_volume_vmdk(connection_info, instance)
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 535, in 
_detach_volume_vmdk
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] if state.lower() != 'poweredoff':
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835] AttributeError: 'int' object has no 
attribute 'lower'
2017-07-11 23:20:56.655 TRACE nova.virt.block_device [instance: 
c5a35862-98c2-4b01-9259-fd250c0af835]

** Affects: nova
 Importance: Undecided
 Assignee: Vipin Balachandran (vbala)
 Status: New


** Tags: vmware

** Changed in: nova
 Assignee: (unassigned) => Vipin Balachandran (vbala)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1704010

Title:
  VMware: attach volume fails with AttributeError

Status in OpenStack Compute (nova):
  New

Bug description:
  Attaching/detaching volume with adapter type IDE fails with:
  AttributeError: 'int' object has no attribute 'lower'

 

[Yahoo-eng-team] [Bug 1522232] Re: error occurs during attaching volumes when there are no slots in SCSI controllers

2015-12-03 Thread Vipin Balachandran
Volume vmdk is attached to the instance in the Nova driver.

** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1522232

Title:
  error occurs during attaching volumes when there are no slots in SCSI
  controllers

Status in OpenStack Compute (nova):
  New

Bug description:
  There can be at most 15 virtual disks attached to a SCSI controller,
  so when this limit is reached, a new controller should be created
  automatically.

  In nova/virt/vmwareapi/vm_util.py:

  def allocate_controller_key_and_unit_number(client_factory, devices,
                                              adapter_type):
      """This function inspects the current set of hardware devices and
      returns controller_key and unit_number that can be used for attaching
      a new virtual disk to adapter with the given adapter_type.
      """
      if devices.__class__.__name__ == "ArrayOfVirtualDevice":
          devices = devices.VirtualDevice

      taken = _find_allocated_slots(devices)

      ret = None
      if adapter_type == 'ide':
          ide_keys = [dev.key for dev in devices if _is_ide_controller(dev)]
          ret = _find_controller_slot(ide_keys, taken, 2)
      elif adapter_type in ['lsiLogic', 'lsiLogicsas', 'busLogic',
                            'paraVirtual']:
          scsi_keys = [dev.key for dev in devices
                       if _is_scsi_controller(dev)]
          ret = _find_controller_slot(scsi_keys, taken, 16)
      if ret:
          return ret[0], ret[1], None

      # create new controller with the specified type and return its spec
      controller_key = -101
      controller_spec = create_controller_spec(client_factory,
                                               controller_key,
                                               adapter_type)
      return controller_key, 0, controller_spec

  Here we can see that if 'ret' is None, a 'create_controller_spec' is
  generated for the creation of a new controller. I tested this function
  and checked the value of 'vmdk_attach_config_spec.deviceChange':
  [(VirtualDeviceConfigSpec){
     dynamicType = None
     dynamicProperty[] = 
     operation = "add"
     fileOperation = "create"
     device =
    (VirtualDisk){
   dynamicType = None
   dynamicProperty[] = 
   key = -100
   deviceInfo =
  (Description){
     dynamicType = None
     dynamicProperty[] = 
     label = None
     summary = None
  }
   backing =
  (VirtualDiskRawDiskMappingVer1BackingInfo){
     dynamicType = None
     dynamicProperty[] = 
     fileName = ""
     datastore =
    (ManagedObjectReference){
   value = None
   _type = ""
    }
     backingObjectId = None
     lunUuid = None
     deviceName = 
"/vmfs/devices/disks/t10.IET_0011"
     compatibilityMode = "physicalMode"
     diskMode = "independent_persistent"
     uuid = None
     contentId = None
     changeId = None
     parent =
    (VirtualDiskRawDiskMappingVer1BackingInfo){
   dynamicType = None
   dynamicProperty[] = 
   fileName = None
   datastore =
  (ManagedObjectReference){
     value = None
     _type = ""
  }
   backingObjectId = None
   lunUuid = None
   deviceName = None
   compatibilityMode = None
   diskMode = None
   uuid = None
   contentId = None
   changeId = None
    }
  }
   connectable =
  (VirtualDeviceConnectInfo){
     dynamicType = None
     dynamicProperty[] = 
     startConnected = True
     allowGuestControl = False
     connected = True
     status = None
  }
   slotInfo =
  (VirtualDeviceBusSlotInfo){
     dynamicType = None
     dynamicProperty[] = 
  }
   controllerKey = -101
   unitNumber = 0
   capacityInKB = 0
   capacityInBytes = None
   shares =
  (SharesInfo){
     dynamicType = None
     dynamicProperty[] = 
     shares = None
     level =
    (SharesLevel){
   value = None
    }
  }
   storageIOAllocation =
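The 15-disk limit noted above comes from each SCSI controller exposing 16 unit numbers, with one unit reserved for the controller itself. A standalone sketch of the slot search (the helper name and data shapes are hypothetical, not Nova's actual `_find_controller_slot`):

```python
# Hypothetical sketch of per-controller slot allocation, mirroring the
# semantics described above: IDE controllers expose 2 unit numbers,
# SCSI controllers expose 16, with SCSI unit 7 reserved for the
# controller itself (hence at most 15 attachable disks).

SCSI_RESERVED_UNIT = 7

def find_controller_slot(controller_keys, taken, max_unit_number):
    """Return (controller_key, unit_number) for the first free slot,
    or None if every controller is full."""
    for key in controller_keys:
        used = taken.get(key, set())
        for unit in range(max_unit_number):
            if unit == SCSI_RESERVED_UNIT and max_unit_number == 16:
                continue  # slot used by the SCSI controller itself
            if unit not in used:
                return key, unit
    return None

# A controller with units 0-6 and 8-14 taken still has unit 15 free:
taken = {-101: set(range(7)) | set(range(8, 15))}
assert find_controller_slot([-101], taken, 16) == (-101, 15)

# Once all 15 usable units are taken, allocation fails and the caller
# must create a new controller (the controller_key = -101 branch above).
taken[-101].add(15)
assert find_controller_slot([-101], taken, 16) is None
```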
  

[Yahoo-eng-team] [Bug 1514910] [NEW] VMware: Unable to detach volume after detach failure

2015-11-10 Thread Vipin Balachandran
Public bug reported:

If volume detach fails after the VM has been reconfigured to remove the
volume's vmdk, it will not be possible to detach the volume again;
subsequent detach attempts will fail with StorageError.

2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 366, in decorated_function
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 4760, in detach_volume
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher 
self._detach_volume(context, volume_id, instance)
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 4743, in _detach_volume
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher 
self._driver_detach_volume(context, instance, bdm)
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 4698, in _driver_detach_volume
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher 
self.volume_api.roll_detaching(context, volume_id)
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
__exit__
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 4686, in _driver_detach_volume
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher 
encryption=encryption)
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 400, in detach_volume
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher return 
self._volumeops.detach_volume(connection_info, instance)
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 581, in detach_volume
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher 
self._detach_volume_vmdk(connection_info, instance)
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 526, in 
_detach_volume_vmdk
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher device = 
self._get_vmdk_backed_disk_device(vm_ref, data)
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 514, in 
_get_vmdk_backed_disk_device
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher raise 
exception.StorageError(reason=_("Unable to find volume"))
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher StorageError: 
Storage error: Unable to find volume
2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher
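One way to avoid the stuck state shown above is to make the driver-side detach idempotent: if the volume's disk device is already gone from the VM, treat the detach as complete rather than raising StorageError. A minimal sketch with hypothetical helper names (not the actual Nova fix):

```python
# Sketch of an idempotent detach check: if the disk device backing the
# volume is no longer attached to the VM (e.g. a previous detach already
# reconfigured the VM), treat the detach as complete instead of raising
# StorageError. Helper names here are hypothetical.

def detach_volume_vmdk(get_disk_device, disconnect_volume):
    """get_disk_device returns the attached disk device or None;
    disconnect_volume performs the VM reconfigure to remove it."""
    device = get_disk_device()
    if device is None:
        # Volume vmdk already removed from the VM; nothing to do.
        return
    disconnect_volume(device)

calls = []
# First detach finds the device and disconnects it.
detach_volume_vmdk(lambda: "disk-1", calls.append)
# A retry after the device is gone is a no-op instead of an error.
detach_volume_vmdk(lambda: None, calls.append)
assert calls == ["disk-1"]
```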

** Affects: nova
 Importance: Undecided
 Assignee: Vipin Balachandran (vbala)
 Status: New


** Tags: vmware volumes

** Changed in: nova
 Assignee: (unassigned) => Vipin Balachandran (vbala)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1514910

Title:
  VMware: Unable to detach volume after detach failure

Status in OpenStack Compute (nova):
  New

Bug description:
  If volume detach fails after the VM has been reconfigured to remove
  the volume's vmdk, it will not be possible to detach the volume again;
  subsequent detach attempts will fail with StorageError.

  2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 366, in decorated_function
  2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 4760, in detach_volume
  2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher 
self._detach_volume(context, volume_id, instance)
  2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 4743, in _detach_volume
  2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher 
self._driver_detach_volume(context, instance, bdm)
  2015-11-10 14:34:52.989 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 4698, in _driv

[Yahoo-eng-team] [Bug 1460044] [NEW] Data loss can occur if cinder attach fails

2015-05-29 Thread Vipin Balachandran
Public bug reported:

Driver detach is not called while handling a failure in Cinder's attach
API. This can result in volume data loss with the VMware driver, since
during driver attach the instance VM is reconfigured with the volume's
vmdk. A subsequent delete of the instance will then delete the volume's
vmdk, because the instance is never reconfigured to remove it after the
attach failure.
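A common pattern for avoiding this data loss is to roll back the driver-side attach when a later attach step fails, so the volume's vmdk is never left attached to an instance that may be deleted. A minimal sketch with hypothetical function names (not the actual Nova code path):

```python
# Sketch of rolling back a driver-side attach when a subsequent attach
# step fails, so the volume's vmdk is not left attached to (and later
# deleted with) the instance. Function names are hypothetical.

def attach_with_rollback(driver_attach, driver_detach, finalize_attach):
    driver_attach()  # reconfigures the VM with the volume's vmdk
    try:
        finalize_attach()  # e.g. Cinder's attach API call
    except Exception:
        # Undo the VM reconfigure so a later instance delete cannot
        # take the volume's vmdk with it.
        driver_detach()
        raise

events = []

def boom():
    raise RuntimeError("cinder attach failed")

try:
    attach_with_rollback(lambda: events.append("attach"),
                         lambda: events.append("detach"),
                         boom)
except RuntimeError:
    pass
assert events == ["attach", "detach"]
```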

** Affects: nova
 Importance: Undecided
 Assignee: Vipin Balachandran (vbala)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Vipin Balachandran (vbala)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460044

Title:
  Data loss can occur if cinder attach fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  Driver detach is not called while handling a failure in Cinder's
  attach API. This can result in volume data loss with the VMware
  driver, since during driver attach the instance VM is reconfigured
  with the volume's vmdk. A subsequent delete of the instance will then
  delete the volume's vmdk, because the instance is never reconfigured
  to remove it after the attach failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1460044/+subscriptions



[Yahoo-eng-team] [Bug 1276207] Re: vmware driver does not validate server certificates

2015-05-04 Thread Vipin Balachandran
** Changed in: cinder
   Status: Fix Released => Confirmed

** Changed in: cinder
 Assignee: Johnson koil raj (jjohnsonkoilraj) => Vipin Balachandran (vbala)

** Changed in: cinder
   Importance: Low => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276207

Title:
  vmware driver does not validate server certificates

Status in Cinder:
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo VMware library for OpenStack projects:
  Fix Released

Bug description:
  The VMware driver establishes connections to vCenter over HTTPS, yet
  the vCenter server certificate is not verified as part of the
  connection process.  I know this because my vCenter server is using a
self-signed certificate, which always fails certificate verification.
  As a result, someone could use a man-in-the-middle attack to spoof the
  vcenter host to nova.

  The vmware driver has a dependency on Suds, which I believe also does
  not validate certificates because hartsock and I noticed it uses
  urllib.

  For reference, here is a link on secure connections in OpenStack:
  https://wiki.openstack.org/wiki/SecureClientConnections

  Assuming Suds is fixed to provide an option for certificate
  verification, the next step would be to modify the vmware driver to
  provide an option to override invalid certificates (such as
  self-signed ones). In other parts of OpenStack, there are options to
  bypass the certificate check with an insecure option set, or you can
  put the server's certificate in the CA store.
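As an illustration of the verify-by-default policy the report asks for, Python's standard ssl module can verify against a CA bundle while still allowing an explicit insecure opt-out (a generic sketch of the pattern, not the driver's or oslo.vmware's actual code):

```python
# Generic sketch of the connection policy described above: verify the
# server certificate by default, with an explicit "insecure" opt-out
# mirroring other OpenStack clients. Not the VMware driver's real code.
import ssl

def make_ssl_context(insecure=False, ca_file=None):
    if insecure:
        # Explicit opt-out: no certificate or hostname verification.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        return ctx
    # Default: verify against the system CA store, or a supplied bundle
    # (e.g. the vCenter server's self-signed certificate).
    return ssl.create_default_context(cafile=ca_file)

secure = make_ssl_context()
assert secure.verify_mode == ssl.CERT_REQUIRED

insecure_ctx = make_ssl_context(insecure=True)
assert insecure_ctx.verify_mode == ssl.CERT_NONE
```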

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1276207/+subscriptions



[Yahoo-eng-team] [Bug 1439168] [NEW] VMware: Attached volume vmdk not removed before instance delete

2015-04-01 Thread Vipin Balachandran
Public bug reported:

During instance delete, the VM is not reconfigured to detach cinder
volume's vmdk, which can result in data-loss.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439168

Title:
  VMware: Attached volume vmdk not removed before instance delete

Status in OpenStack Compute (Nova):
  New

Bug description:
  During instance delete, the VM is not reconfigured to detach cinder
  volume's vmdk, which can result in data-loss.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439168/+subscriptions



[Yahoo-eng-team] [Bug 1425502] [NEW] VMware: Detaching volume fails with FileNotFoundException

2015-02-25 Thread Vipin Balachandran
 oslo_messaging.rpc.dispatcher 
encryption=encryption)
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 487, in detach_volume
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher instance)
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 539, in detach_volume
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher 
self._detach_volume_vmdk(connection_info, instance)
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 504, in 
_detach_volume_vmdk
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher 
disk_type=vmdk.disk_type)
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 455, in 
_consolidate_vmdk_volume
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher 
self._relocate_vmdk_volume(volume_ref, res_pool, datastore, host)
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 384, in 
_relocate_vmdk_volume
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher 
self._session._wait_for_task(task)
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 673, in _wait_for_task
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher return 
self.wait_for_task(task_ref)
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/oslo.vmware/oslo_vmware/api.py", line 380, in wait_for_task
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher return 
evt.wait()
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher return 
hubs.get_hub().switch()
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 294, in 
switch
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher return 
self.greenlet.switch()
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/oslo.vmware/oslo_vmware/common/loopingcall.py", line 76, in _inner
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher 
self.f(*self.args, **self.kw)
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/oslo.vmware/oslo_vmware/api.py", line 417, in _poll_task
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher raise task_ex
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher 
FileNotFoundException: File 
/vmfs/volumes/54ae58ae-5db2e67f-38b1-0200066a0432/volume-dc966ab1-9d8b-4e3b-a3b4-7b610a9b0f19/volume-dc966ab1-9d8b-4e3b-a3b4-7b610a9b0f19.vmdk
 was not found
2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher

** Affects: nova
 Importance: Undecided
 Assignee: Vipin Balachandran (vbala)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Vipin Balachandran (vbala)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425502

Title:
  VMware: Detaching volume fails with FileNotFoundException

Status in OpenStack Compute (Nova):
  New

Bug description:
  Steps to reproduce:

  a) Create nova instance vm-1 (assume that the datastore is a shared datastore 
'ds-1' which is a member of datastore cluster dcls-1)
  b) Create volume vol-1 with a storage profile which maps to ds-1
  c) Attach vol-1 to vm-1
  d) Migrate vm-1's datastore to shared datastore ds-2 which is a member of the 
dcls-1
  (The vmdk of vol-1 *moves* to vm-1's new location after this step)
  e) Detach vol-1

  
  2015-02-25 18:05:00.014 ERROR oslo_messaging.rpc.dispatcher 
[req-124c1c0a-2a00-46b1-a790-0a2e5d6da871 admin demo] Exception during message 
handling: File 
/vmfs/volumes/54ae58ae-5db2e67f-38b1-0200066a0432/volume-dc966ab1-9d8b-4e3b-a3b4-7b610a9b0f19/volume-dc966ab1-9d8b-4e3b-a3b4-7b610a9b0f19.vmdk
 was not found
  2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
  2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 142, in _dispatch_and_reply
  2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 186, in _dispatch
  2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-02-25 18:05:00.014 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc

[Yahoo-eng-team] [Bug 1425486] [NEW] VMware: Detach volume fails with Unable to access the virtual machine configuration: Unable to access file

2015-02-25 Thread Vipin Balachandran
Public bug reported:

Steps to reproduce:

* Create volume vol-1 
* Create instance vm-1
* Attach vol-1 to vm-1

Assuming both are created in local datastore ds-1 accessible to esx host
h-1

* Migrate vm-1 to a different esx host h-2 and datastore ds-2
* Now detach vol-1

** Affects: nova
 Importance: Undecided
 Assignee: Vipin Balachandran (vbala)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Vipin Balachandran (vbala)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425486

Title:
  VMware: Detach volume fails with Unable to access the virtual machine
  configuration: Unable to access file

Status in OpenStack Compute (Nova):
  New

Bug description:
  Steps to reproduce:

  * Create volume vol-1 
  * Create instance vm-1
  * Attach vol-1 to vm-1

  Assuming both are created in local datastore ds-1 accessible to esx
  host h-1

  * Migrate vm-1 to a different esx host h-2 and datastore ds-2
  * Now detach vol-1
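
  The failure mode above (detach resolving the volume's vmdk via the
  pre-migration datastore path) suggests looking the disk up in the VM's
  *current* device list instead of trusting the path cached at attach time.
  The sketch below is illustrative only, not the actual nova driver code:
  plain dicts stand in for the VM's config.hardware.device entries, and the
  field names ("type", "uuid", "backing_file") are hypothetical.

```python
def find_volume_vmdk(vm_devices, volume_uuid):
    """Return the current backing file of the disk matching volume_uuid.

    vm_devices: list of dicts standing in for the VM's
    config.hardware.device entries (hypothetical shape, for illustration).
    """
    for dev in vm_devices:
        if dev.get("type") == "VirtualDisk" and dev.get("uuid") == volume_uuid:
            return dev["backing_file"]
    raise LookupError("volume %s not attached to this VM" % volume_uuid)


# After the migration the backing file moved with the VM to ds-2,
# so a lookup by the volume's stable UUID finds the new path:
devices = [
    {"type": "VirtualDisk", "uuid": "vol-1-uuid",
     "backing_file": "[ds-2] vm-1/volume-vol-1.vmdk"},
]
print(find_volume_vmdk(devices, "vol-1-uuid"))
```

  Keying the lookup on something stable (a disk UUID) rather than a file
  path is what makes the detach robust against storage migration.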

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1425486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416000] Re: VMware: write error lost while transferring volume

2015-02-18 Thread Vipin Balachandran
** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1416000

Title:
  VMware: write error lost while transferring volume

Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo VMware library for OpenStack projects:
  Confirmed

Bug description:
  I'm running the following command:

  cinder create --image-id a24f216f-9746-418e-97f9-aebd7fa0e25f 1

  The write side of the data transfer (a VMwareHTTPWriteFile object)
  returns an error in write() which I haven't debugged yet. However,
  this error is never reported to the user, although it does show up in
  the logs. The effect is that the transfer sits in the 'downloading'
  state until the 7200-second timeout expires, when it reports the timeout.

  The reason is that the code which waits on transfer completion (in
  start_transfer) does:

  try:
  # Wait on the read and write events to signal their end
  read_event.wait()
  write_event.wait()
  except (timeout.Timeout, Exception) as exc:
  ...

  That is, it waits for the read thread to signal completion via
  read_event before checking write_event. However, because write_thread
  has died, read_thread is blocking and will never signal completion.
  You can demonstrate this by swapping the order: if you wait on
  write_event first, the wait ends immediately, which is what you want.
  However, that's not right either, because now read errors are missed.

  Ideally this code needs to be able to notice an error at either end
  and stop immediately.
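
  One way to get that behaviour is a small completion object that both
  sides signal, so a failure on either side unblocks the single waiter at
  once. The sketch below is illustrative only — the TransferWatcher name
  and its methods are not part of the actual nova/oslo.vmware code:

```python
import threading


class TransferWatcher:
    """Tracks a two-sided transfer; either side can finish or fail."""

    def __init__(self):
        self._done = threading.Event()   # set as soon as anything ends the transfer
        self._lock = threading.Lock()
        self._pending = 2                # read side + write side
        self.error = None

    def side_finished(self):
        with self._lock:
            self._pending -= 1
            if self._pending == 0:
                self._done.set()         # both sides finished cleanly

    def side_failed(self, exc):
        with self._lock:
            if self.error is None:
                self.error = exc
            self._done.set()             # stop waiting immediately

    def wait(self, timeout=None):
        if not self._done.wait(timeout):
            raise TimeoutError("transfer timed out")
        if self.error is not None:
            raise self.error


# The write thread dies immediately; the waiter sees the error right
# away instead of blocking on the (still-running) read side.
watcher = TransferWatcher()

def write_thread():
    watcher.side_failed(IOError("simulated write failure"))

t = threading.Thread(target=write_thread)
t.start()
t.join()

try:
    watcher.wait(timeout=5)
except IOError as exc:
    print("transfer failed:", exc)
```

  With a single wait point there is no ordering to get wrong, which is
  exactly the trap the two sequential event waits fall into.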

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1416000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419474] [NEW] VMware: Volume attach fails with VMwareDriverException: A specified parameter was not correct.

2015-02-08 Thread Vipin Balachandran
, in 
switch
2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 
0f756e6b-f8c0-4efc-880e-216ebb943591]   File 
/usr/local/lib/python2.7/dist-packages/oslo/vmware/common/loopingcall.py, 
line 76, in _inner
2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 
0f756e6b-f8c0-4efc-880e-216ebb943591] self.f(*self.args, **self.kw)
2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 
0f756e6b-f8c0-4efc-880e-216ebb943591]   File 
/usr/local/lib/python2.7/dist-packages/oslo/vmware/api.py, line 424, in 
_poll_task
2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 
0f756e6b-f8c0-4efc-880e-216ebb943591] raise task_ex
2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 
0f756e6b-f8c0-4efc-880e-216ebb943591] VMwareDriverException: A specified 
parameter was not correct.
2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 
0f756e6b-f8c0-4efc-880e-216ebb943591] 
config.extraConfig[volume-e375857b-5b8f-409a-9303-db6d33956fe1]
2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 
0f756e6b-f8c0-4efc-880e-216ebb943591]
2015-02-04 14:30:21.100 DEBUG nova.volume.cinder 
[req-737b0e79-f91d-41b4-96f0-7449251545b9 admin demo] Cinderclient connection 
created using URL: 
http://192.168.200.151:8776/v1/4c54a363542f4d899dd48c0d38e46475 from 
(pid=21418) cinderclient /opt/stack/nova/nova/volume/cinder.py:93
2015-02-04 14:30:21.167 ERROR nova.compute.manager 
[req-737b0e79-f91d-41b4-96f0-7449251545b9 admin demo] [instance: 
0f756e6b-f8c0-4efc-880e-216ebb943591] Failed to attach 
e375857b-5b8f-409a-9303-db6d33956fe1 at /dev/sdb

** Affects: nova
 Importance: Undecided
 Assignee: Vipin Balachandran (vbala)
 Status: New


** Tags: vmware

** Changed in: nova
 Assignee: (unassigned) => Vipin Balachandran (vbala)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1419474

Title:
  VMware: Volume attach fails with VMwareDriverException: A specified
  parameter was not correct.

Status in OpenStack Compute (Nova):
  New

Bug description:
  Steps to reproduce:
  * Attach vol-1 to vm-1
  * Change host and datastore of vm-1
  * Detach vol-1 from vm-1
  * Attach vol-1 to vm-1

  
  2015-02-04 14:30:21.098 DEBUG oslo.vmware.exceptions [-] Fault 
InvalidArgument not matched. from (pid=21418) get_fault_class 
/usr/local/lib/python2.7/dist-packages/oslo/vmware/exceptions.py:249
  2015-02-04 14:30:21.098 ERROR oslo.vmware.common.loopingcall [-] in fixed 
duration looping call
  2015-02-04 14:30:21.098 TRACE oslo.vmware.common.loopingcall Traceback (most 
recent call last):
  2015-02-04 14:30:21.098 TRACE oslo.vmware.common.loopingcall   File 
/usr/local/lib/python2.7/dist-packages/oslo/vmware/common/loopingcall.py, 
line 76, in _inner
  2015-02-04 14:30:21.098 TRACE oslo.vmware.common.loopingcall 
self.f(*self.args, **self.kw)
  2015-02-04 14:30:21.098 TRACE oslo.vmware.common.loopingcall   File 
/usr/local/lib/python2.7/dist-packages/oslo/vmware/api.py, line 424, in 
_poll_task
  2015-02-04 14:30:21.098 TRACE oslo.vmware.common.loopingcall raise task_ex
  2015-02-04 14:30:21.098 TRACE oslo.vmware.common.loopingcall 
VMwareDriverException: A specified parameter was not correct.
  2015-02-04 14:30:21.098 TRACE oslo.vmware.common.loopingcall 
config.extraConfig[volume-e375857b-5b8f-409a-9303-db6d33956fe1]
  2015-02-04 14:30:21.098 TRACE oslo.vmware.common.loopingcall
  2015-02-04 14:30:21.099 ERROR nova.virt.block_device 
[req-737b0e79-f91d-41b4-96f0-7449251545b9 admin demo] [instance: 
0f756e6b-f8c0-4efc-880e-216ebb943591] Driver failed to attach volume 
e375857b-5b8f-409a-9303-db6d33956fe1 at /dev/sdb
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 
0f756e6b-f8c0-4efc-880e-216ebb943591] Traceback (most recent call last):
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 
0f756e6b-f8c0-4efc-880e-216ebb943591]   File 
/opt/stack/nova/nova/virt/block_device.py, line 249, in attach
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 
0f756e6b-f8c0-4efc-880e-216ebb943591] device_type=self['device_type'], 
encryption=encryption)
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 
0f756e6b-f8c0-4efc-880e-216ebb943591]   File 
/opt/stack/nova/nova/virt/vmwareapi/driver.py, line 479, in attach_volume
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 
0f756e6b-f8c0-4efc-880e-216ebb943591] return 
_volumeops.attach_volume(connection_info, instance)
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 
0f756e6b-f8c0-4efc-880e-216ebb943591]   File 
/opt/stack/nova/nova/virt/vmwareapi/volumeops.py, line 427, in attach_volume
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 
0f756e6b-f8c0-4efc-880e-216ebb943591] 
self._attach_volume_vmdk(connection_info, instance)
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device

[Yahoo-eng-team] [Bug 1226543] Re: VMware: attaching a volume to the VM failed

2014-07-16 Thread Vipin Balachandran
This is not a valid cinder bug. The nova VMware driver has to create or
use an existing controller in the instance VM to attach the volume, and
that is unrelated to the controller in the volume backing. Please
re-open this if you disagree.

** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1226543

Title:
  VMware: attaching a volume to the VM failed

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  I was using the image trend-tinyvm1-flat.vmdk. When I booted a VM and
  attached a volume to it, the following error message was shown:

  n-cpu==

  2013-09-17 06:47:05.669 ERROR nova.openstack.common.rpc.amqp 
[req-fc4e0a9e-6bd5-45ac-b841-c89e17394603 admin demo] Exception during message handling
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp Traceback (most 
recent call last):
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py, line 461, in _process_data
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp **args)
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py, line 172, in dispatch
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/exception.py, line 89, in wrapped
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp payload)
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/exception.py, line 73, in wrapped
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 243, in decorated_function
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp pass
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 229, in decorated_function
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 271, in decorated_function
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 258, in decorated_function
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 3577, in attach_volume
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp context, 
instance, mountpoint)
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 3572, in attach_volume
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp mountpoint, 
instance)
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 3614, in _attach_volume
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp connector)
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 3604, in _attach_volume
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp mountpoint)
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/virt/vmwareapi/driver.py, line 290, in attach_volume
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp mountpoint)
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/virt/vmwareapi/volumeops.py, line 272, in attach_volume
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp 
self._attach_volume_vmdk(connection_info, instance, mountpoint)
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/virt/vmwareapi/volumeops.py, line 222, in 
_attach_volume_vmdk
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp 
unit_number=unit_number)
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/virt/vmwareapi/volumeops.py, line 71, in 
attach_disk_to_vm
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp 
self._session._wait_for_task(instance_uuid, reconfig_task)
  2013-09-17 06:47:05.669 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/virt/vmwareapi/driver.py, line 779, in 

[Yahoo-eng-team] [Bug 1289627] Re: VMware NoPermission faults do not log what permission was missing

2014-07-07 Thread Vipin Balachandran
Released in oslo.vmware 0.3.

** Changed in: oslo.vmware
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1289627

Title:
  VMware NoPermission faults do not log what permission was missing

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in Oslo VMware library for OpenStack projects:
  Fix Released

Bug description:
  NoPermission object has a privilegeId that tells us which permission
  the user did not have. Presently the VMware nova driver does not log
  this data. This is very useful for debugging user permissions problems
  on vCenter or ESX.

  
http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.fault.NoPermission.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1289627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1324036] Re: Can't add authenticated iscsi volume to a vmware instance

2014-06-05 Thread Vipin Balachandran
This is unrelated to the vmdk driver.

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1324036

Title:
  Can't add authenticated iscsi volume to a vmware instance

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  The VMware driver doesn't pass volume authentication information to
  the HBA when attaching an iSCSI volume. Consequently, attaching an
  iSCSI volume which requires authentication will always fail.
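
  A fix would forward the CHAP credentials that Cinder already places in
  connection_info['data'] (auth_method, auth_username, auth_password) to
  the host's iSCSI HBA configuration. The sketch below only shows the
  extraction step; the chap* field names in the returned dict are
  illustrative, not the real vSphere API specification:

```python
def build_hba_auth(connection_info):
    """Extract CHAP credentials from an iSCSI connection_info dict.

    Returns None when the target requires no authentication. The keys of
    the returned dict are hypothetical stand-ins for the HBA auth fields.
    """
    data = connection_info.get("data", {})
    if data.get("auth_method", "").upper() != "CHAP":
        return None  # no authentication required for this target
    return {
        "chapAuthEnabled": True,
        "chapName": data["auth_username"],
        "chapSecret": data["auth_password"],
    }


# connection_info shaped the way Cinder's iSCSI drivers populate it:
conn = {"data": {"target_iqn": "iqn.2010-10.org.openstack:volume-1",
                 "auth_method": "CHAP",
                 "auth_username": "user1",
                 "auth_password": "s3cret"}}
print(build_hba_auth(conn))
```

  Dropping these fields on the floor, as the driver currently does, is
  exactly why authenticated targets never log in.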

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1324036/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289397] Re: vmware: nova instance delete - show status error

2014-03-10 Thread Vipin Balachandran
** Changed in: cinder
 Assignee: Vipin Balachandran (vbala) => (unassigned)

** Project changed: cinder => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1289397

Title:
  vmware: nova instance delete - show status error

Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  ssatya@devstack:~$ nova boot --image 1e95fe6b-cec6-4420-97d1-1e7bc8c81c49 --flavor 1 testdummay
  +--------------------------------------+-----------------------------------------------------------+
  | Property                             | Value                                                     |
  +--------------------------------------+-----------------------------------------------------------+
  | OS-DCF:diskConfig                    | MANUAL                                                    |
  | OS-EXT-AZ:availability_zone          | nova                                                      |
  | OS-EXT-STS:power_state               | 0                                                         |
  | OS-EXT-STS:task_state                | networking                                                |
  | OS-EXT-STS:vm_state                  | building                                                  |
  | OS-SRV-USG:launched_at               | -                                                         |
  | OS-SRV-USG:terminated_at             | -                                                         |
  | accessIPv4                           |                                                           |
  | accessIPv6                           |                                                           |
  | adminPass                            | fK8SPGtHLUds                                              |
  | config_drive                         |                                                           |
  | created                              | 2014-03-07T14:33:49Z                                      |
  | flavor                               | m1.tiny (1)                                               |
  | hostId                               | 2c1ae30aa2a235d9c0c8b04aae3f4199cd98356e44a03b5c8f878adb  |
  | id                                   | eae503d9-c6f7-4e3e-9adc-0b8b6803c90e                      |
  | image                                | debian-2.6.32-i686 (1e95fe6b-cec6-4420-97d1-1e7bc8c81c49) |
  | key_name                             | -                                                         |
  | metadata                             | {}                                                        |
  | name                                 | testdummay                                                |
  | os-extended-volumes:volumes_attached | []                                                        |
  | progress                             | 0                                                         |
  | security_groups                      | default                                                   |
  | status                               | BUILD                                                     |
  | tenant_id                            | 209ab7e4f3744675924212805db3ad74                          |
  | updated                              | 2014-03-07T14:33:50Z                                      |
  | user_id                              | f3756a4910054883b84ee15acc15fbd1                          |
  +--------------------------------------+-----------------------------------------------------------+
  ssatya@devstack:~$ nova list
  
+--++++-+--+
  | ID   | Name   | Status | Task State | 
Power State | Networks |
  
+--++++-+--+
  | eae503d9-c6f7-4e3e-9adc-0b8b6803c90e | testdummay | BUILD  | spawning   | 
NOSTATE |  |
  | d1e982c4-85c2-422d-b046-1643bd81e674 | testvm1| ERROR  | deleting   | 
Shutdown| private=10.0.0.2 |
  
+--++++-+--+
  ssatya@devstack:~$ nova list
  
+--++++-+--+
  | ID   | Name   | Status | Task State | 
Power State | Networks |
  
+--++++-+--+
  | eae503d9-c6f7-4e3e-9adc-0b8b6803c90e | testdummay | ACTIVE | -  | 
Running | private=10.0.0.3 |
  | d1e982c4-85c2-422d-b046-1643bd81e674 | testvm1| ERROR  | deleting   | 
Shutdown| private=10.0.0.2