[Yahoo-eng-team] [Bug 1425178] [NEW] novncproxy refactoring removes configurability of host and port

2015-02-24 Thread Loganathan Parthipan
Public bug reported:

The novncproxy.py was refactored and baseproxy was introduced some time
back in patch https://review.openstack.org/#/c/119396/.

But because the config files are no longer resolved in novncproxy, the
baseproxy is passed the default values for novncproxy_host and
novncproxy_port. Overrides in config files such as nova.conf will not
have any effect.

This is a problem if you want to make novnc listen on a different
interface/port, a specific use case being stunnel TLS termination.
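
A minimal sketch of the kind of fix this implies, assuming
oslo.config-style option handling (the option names match nova's, but
the module layout and defaults here are assumptions, not the actual
patch):

import sys

from oslo_config import cfg

opts = [
    cfg.StrOpt('novncproxy_host', default='0.0.0.0',
               help='Host on which to listen for incoming requests'),
    cfg.IntOpt('novncproxy_port', default=6080,
               help='Port on which to listen for incoming requests'),
]

CONF = cfg.CONF
CONF.register_cli_opts(opts)

def main():
    # Parse nova.conf before reading the options, so config-file
    # overrides take effect instead of the hard-coded defaults.
    CONF(sys.argv[1:], project='nova',
         default_config_files=['/etc/nova/nova.conf'])
    # baseproxy would then be handed the resolved values, e.g.:
    #   baseproxy.proxy(host=CONF.novncproxy_host,
    #                   port=CONF.novncproxy_port, ...)
    print(CONF.novncproxy_host, CONF.novncproxy_port)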

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425178

Title:
  novncproxy refactoring removes configurability of host and port

Status in OpenStack Compute (Nova):
  New

Bug description:
  The novncproxy.py was refactored and baseproxy was introduced some
  time back in patch https://review.openstack.org/#/c/119396/.

  But because the config files are no longer resolved in novncproxy,
  the baseproxy is passed the default values for novncproxy_host and
  novncproxy_port. Overrides in config files such as nova.conf will not
  have any effect.

  This is a problem if you want to make novnc listen on a different
  interface/port, a specific use case being stunnel TLS termination.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1425178/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274137] Re: Libvirt driver suspend returns before completion of the task

2015-02-06 Thread Loganathan Parthipan
I haven't reproduced this since then. Closing it.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274137

Title:
  Libvirt driver suspend returns before completion of the task

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The suspend() function in the libvirt driver issues a managedsave()
  call and returns immediately. This results in the compute manager
  setting task_state from 'suspending' back to None.

  This is not good: a host may be set to reboot on the assumption that
  all VMs are in stable states, or other operations on the VM itself
  may get through because task_state is None.

  It would be better if the driver's suspend() actually waited for the
  power_state to move from PAUSED to SHUTOFF and only then returned
  control to the manager. This would ensure that no inconsistent
  task_states are reported.
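
  A sketch of the waiting behaviour suggested here, using
  libvirt-python directly (the timeout and helper name are assumptions,
  not the driver's actual code):

  import time

  import libvirt

  def wait_for_managed_save(dom, timeout=60):
      # Block until the domain really reaches SHUTOFF after
      # managedSave(), so the manager does not clear task_state early.
      dom.managedSave(0)
      for _ in range(timeout):
          if dom.info()[0] == libvirt.VIR_DOMAIN_SHUTOFF:
              return
          time.sleep(1)
      raise RuntimeError('managed save did not complete in time')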

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274137/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1322702] [NEW] libvirt get_host_capabilities duplicates

2014-05-23 Thread Loganathan Parthipan
Public bug reported:

get_host_capabilities() in libvirt driver seems to have a bug that will
result in duplicated features.

def get_host_capabilities(self):
    """Returns an instance of config.LibvirtConfigCaps representing
       the capabilities of the host.
    """
    if not self._caps:
        xmlstr = self._conn.getCapabilities()
        self._caps = vconfig.LibvirtConfigCaps()
        self._caps.parse_str(xmlstr)
        if hasattr(libvirt, 'VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES'):
            try:
                features = self._conn.baselineCPU(
                    [self._caps.host.cpu.to_xml()],
                    libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
                # FIXME(wangpan): the return value of baselineCPU should be
                #                 None or xml string, but libvirt has a bug
                #                 of it from 1.1.2 which is fixed in 1.2.0,
                #                 this -1 checking should be removed later.
                if features and features != -1:
                    self._caps.host.cpu.parse_str(features)
            except libvirt.libvirtError as ex:
                error_code = ex.get_error_code()
                if error_code == libvirt.VIR_ERR_NO_SUPPORT:
                    LOG.warn(_LW("URI %(uri)s does not support full set"
                                 " of host capabilities: %(error)s"),
                             {'uri': self.uri(), 'error': ex})
                else:
                    raise
    return self._caps


The _caps.parse_str() is called in sequence for both the capabilities
XML and the expanded baseline features. Since the capabilities will
already contain certain features when running in a VM, and these are
repeated again in the expanded baseline, _caps.host.cpu.features will
end up with duplicated features. This will cause the CPU comparison to
fail later.
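
One possible remedy (a sketch, not necessarily the actual fix):
deduplicate the feature list by name after parsing the expanded
baseline.

def dedup_cpu_features(cpu):
    # Keep the first occurrence of each feature name; assumes feature
    # objects expose a .name attribute, as in the session below.
    seen = set()
    unique = []
    for f in cpu.features:
        if f.name not in seen:
            seen.add(f.name)
            unique.append(f)
    cpu.features = unique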


(nova)root@overcloud-novacompute0-un6ckrnp5tzl:~# python
Python 2.7.6 (default, Mar 22 2014, 22:59:38) 
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import libvirt
>>> conn = libvirt.open("qemu:///system")
>>> from nova.virt.libvirt import config as vconfig
>>> caps = vconfig.LibvirtConfigCaps()
>>> xmlstr = conn.getCapabilities()
>>> caps.parse_str(xmlstr)
>>> features = conn.baselineCPU([caps.host.cpu.to_xml()],
...     libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
>>> caps.host.cpu.parse_str(features)
>>> for f in caps.host.cpu.features:
... print f.name
... 
hypervisor
popcnt
hypervisor
popcnt
pni
sse2
sse
fxsr
mmx
pat
cmov
pge
sep
apic
cx8
mce
pae
msr
tsc
pse
de
fpu
>>>

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1322702

Title:
  libvirt get_host_capabilities duplicates

Status in OpenStack Compute (Nova):
  New

Bug description:
  get_host_capabilities() in libvirt driver seems to have a bug that
  will result in duplicated features.

  def get_host_capabilities(self):
      """Returns an instance of config.LibvirtConfigCaps representing
         the capabilities of the host.
      """
      if not self._caps:
          xmlstr = self._conn.getCapabilities()
          self._caps = vconfig.LibvirtConfigCaps()
          self._caps.parse_str(xmlstr)
          if hasattr(libvirt, 'VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES'):
              try:
                  features = self._conn.baselineCPU(
                      [self._caps.host.cpu.to_xml()],
                      libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
                  # FIXME(wangpan): the return value of baselineCPU should be
                  #                 None or xml string, but libvirt has a bug
                  #                 of it from 1.1.2 which is fixed in 1.2.0,
                  #                 this -1 checking should be removed later.
                  if features and features != -1:
                      self._caps.host.cpu.parse_str(features)
              except libvirt.libvirtError as ex:
                  error_code = ex.get_error_code()
                  if error_code == libvirt.VIR_ERR_NO_SUPPORT:
                      LOG.warn(_LW("URI %(uri)s does not support full set"
                                   " of host capabilities: %(error)s"),
                               {'uri': self.uri(), 'error': ex})
                  else:
                      raise
      return self._caps

  
  The _caps.parse_str() is called in sequence for both the capabilities
  XML and the expanded baseline features. Since the capabilities will
  already contain certain features when running in a VM, and these are
  repeated again in the expanded baseline, _caps.host.cpu.features will
  end up with duplicated features. This will cause the CPU comparison
  to fail later.
[Yahoo-eng-team] [Bug 1319797] [NEW] Restarting destination compute manager during live-migration can cause instance data loss

2014-05-15 Thread Loganathan Parthipan
Public bug reported:

During compute manager startup, init_host is called. One of the
functions there, _destroy_evacuated_instances, deletes instance data
that doesn't belong to this host. But this function only checks whether
the local instance belongs to the host; it doesn't check the
task_state.
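
A sketch of the missing guard (the helper names follow nova's compute
manager, but this is an illustration, not the actual fix):

from nova.compute import task_states

def _destroy_evacuated_instances(self, context):
    # Hypothetical variant: skip instances that are mid-migration
    # instead of treating every non-local instance as evacuated.
    for instance in self._get_instances_on_driver(context):
        if instance['host'] == self.host:
            continue
        if instance['task_state'] == task_states.MIGRATING:
            continue
        self._destroy_instance_files(instance)  # assumed helper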

Suppose a live-migration is in progress and the destination compute
manager is restarted: it will find the migrating instance as not
belonging to the host and destroy it. This can result in two outcomes:

1. If the live-migration is still in progress, the source hypervisor
will hang, so a rollback can be triggered by killing the job.
2. However, if the live-migration has completed and
post-live-migration-destination has been messaged, then by the time the
compute manager gets to processing the message, the instance data will
have been deleted. Subsequent periodic tasks will only get as far as
defining the VM, but there won't be any disks left.

2014-05-08 20:42:33.058 16724 WARNING nova.virt.libvirt.driver [-] Periodic task 
is updating the host stat, it is trying to get disk instance-0002, but disk 
file was removed by concurrent operations such as resize.
2014-05-08 20:43:33.370 16724 WARNING nova.virt.libvirt.driver [-] Periodic 
task is updating the host stat, it is trying to get disk instance-0002, but 
disk file was removed by concurrent operations such as resize.

Steps to reproduce:

1. Start live-migration
2. Wait for pre-live-migration to define the destination VM
3. Restart destination compute manager

To see what happens in case 2 (live-migration having completed), put a
breakpoint in init_host, delay until the instance is running on the
destination, and then continue nova-compute. In this case you'll end up
with an instance directory like this:


ls -l 06ddbe13-577b-4f9f-ac52-0c038aec04d8
total 8
-rw-r--r-- 1 root root   89 May  8 19:59 disk.info
-rw-r--r-- 1 root root 1548 May  8 19:59 libvirt.xml

I verified this in a tripleo devtest environment.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1319797

Title:
  Restarting destination compute manager during live-migration can cause
  instance data loss

Status in OpenStack Compute (Nova):
  New

Bug description:
  During compute manager startup, init_host is called. One of the
  functions there, _destroy_evacuated_instances, deletes instance data
  that doesn't belong to this host. But this function only checks
  whether the local instance belongs to the host; it doesn't check the
  task_state.

  Suppose a live-migration is in progress and the destination compute
  manager is restarted: it will find the migrating instance as not
  belonging to the host and destroy it. This can result in two
  outcomes:

  1. If the live-migration is still in progress, the source hypervisor
  will hang, so a rollback can be triggered by killing the job.
  2. However, if the live-migration has completed and
  post-live-migration-destination has been messaged, then by the time
  the compute manager gets to processing the message, the instance data
  will have been deleted. Subsequent periodic tasks will only get as
  far as defining the VM, but there won't be any disks left.

  2014-05-08 20:42:33.058 16724 WARNING nova.virt.libvirt.driver [-] Periodic 
task is updating the host stat, it is trying to get disk instance-0002, but 
disk file was removed by concurrent operations such as resize.
  2014-05-08 20:43:33.370 16724 WARNING nova.virt.libvirt.driver [-] Periodic 
task is updating the host stat, it is trying to get disk instance-0002, but 
disk file was removed by concurrent operations such as resize.

  Steps to reproduce:

  1. Start live-migration
  2. Wait for pre-live-migration to define the destination VM
  3. Restart destination compute manager

  To see what happens in case 2 (live-migration having completed), put
  a breakpoint in init_host, delay until the instance is running on the
  destination, and then continue nova-compute. In this case you'll end
  up with an instance directory like this:

  
  ls -l 06ddbe13-577b-4f9f-ac52-0c038aec04d8
  total 8
  -rw-r--r-- 1 root root   89 May  8 19:59 disk.info
  -rw-r--r-- 1 root root 1548 May  8 19:59 libvirt.xml

  I verified this in a tripleo devtest environment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1319797/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1305062] [NEW] live migration doesn't work for VMs in paused or rescued states

2014-04-09 Thread Loganathan Parthipan
Public bug reported:

I'd expect live-migration to migrate instances in paused or rescued
states as they're online VMs anyway, as opposed to those that are
shutoff or suspended. However, I see that it's currently disabled in the
compute API.

nova live-migration --block-migrate b32b0eb4-b8aa-4204-b104-61e733839227 
overcloud-novacompute0-aipxwbabgq7a
ERROR: Cannot 'os-migrateLive' while instance is in vm_state paused (HTTP 409) 
(Request-ID: req-d5b47ef0-6674-48dd-b4ac-eea86951246c)

nova live-migration --block-migrate b32b0eb4-b8aa-4204-b104-61e733839227 
overcloud-novacompute0-aipxwbabgq7a
ERROR: Cannot 'os-migrateLive' while instance is in vm_state rescued (HTTP 409) 
(Request-ID: req-5eea8f1c-5602-4d48-9690-929baff3b560)

This is stopped by this decorator:

@check_instance_state(vm_state=[vm_states.ACTIVE])
def live_migrate(self, context, instance, block_migration,

However, there is more to it than fixing the decorator.
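
For illustration, the relaxed check could look like this (a sketch;
whether PAUSED and RESCUED are actually safe to allow is exactly the
open question, and the full signature here is assumed):

from nova.compute import vm_states
from nova.compute.api import check_instance_state

class API(object):
    @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.PAUSED,
                                    vm_states.RESCUED])
    def live_migrate(self, context, instance, block_migration,
                     disk_over_commit, host_name):
        pass  # delegate to the conductor as today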

Should we treat this as a feature or bugfix?

** Affects: nova
 Importance: Undecided
 Assignee: Loganathan Parthipan (parthipan)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Loganathan Parthipan (parthipan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1305062

Title:
  live migration doesn't work for VMs in paused or rescued states

Status in OpenStack Compute (Nova):
  New

Bug description:
  I'd expect live-migration to migrate instances in paused or rescued
  states as they're online VMs anyway, as opposed to those that are
  shutoff or suspended. However, I see that it's currently disabled in
  the compute API.

  nova live-migration --block-migrate b32b0eb4-b8aa-4204-b104-61e733839227 
overcloud-novacompute0-aipxwbabgq7a
  ERROR: Cannot 'os-migrateLive' while instance is in vm_state paused (HTTP 
409) (Request-ID: req-d5b47ef0-6674-48dd-b4ac-eea86951246c)

  nova live-migration --block-migrate b32b0eb4-b8aa-4204-b104-61e733839227 
overcloud-novacompute0-aipxwbabgq7a
  ERROR: Cannot 'os-migrateLive' while instance is in vm_state rescued (HTTP 
409) (Request-ID: req-5eea8f1c-5602-4d48-9690-929baff3b560)

  This is stopped by this decorator:

  @check_instance_state(vm_state=[vm_states.ACTIVE])
  def live_migrate(self, context, instance, block_migration,

  However, there is more to it than fixing the decorator.

  Should we treat this as a feature or bugfix?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1305062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1256344] Re: jsonschema min_version requirements are not sufficient

2014-02-27 Thread Loganathan Parthipan
I think this bug can be closed as I don't see this any more.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1256344

Title:
  jsonschema min_version requirements are not sufficient

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The unit tests are failing with the current min_version for
  jsonschema==1.3.0, for the following reason: in 1.3.0,
  jsonschema.validators is a 'dict', so there is no extend method; in
  2.0.0 it has an extend method defined. So the min_version should be
  raised to 2.0.0.
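
  A quick probe of the incompatibility (a sketch; Draft4Validator is
  used here only as an example base validator):

  import jsonschema

  if hasattr(jsonschema.validators, 'extend'):
      # jsonschema >= 2.0.0: extend() returns a new validator class.
      validator_cls = jsonschema.validators.extend(
          jsonschema.Draft4Validator, validators={})
  else:
      # jsonschema 1.3.0: validators is a plain dict, hence the
      # AttributeError: 'dict' object has no attribute 'extend'
      raise RuntimeError('jsonschema >= 2.0.0 is required')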

  The errors seen are:

  ==
  FAIL: 
nova.tests.test_api_validation.AdditionalPropertiesDisableTestCase.test_validate_additionalProperties_disable
  --
  _StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File 
"/tmp/buildd/nova-2014.1.dev990.g2c795f0/nova/tests/test_api_validation.py", 
line 133, in setUp
  @validation.schema(request_body_schema=schema)
File 
"/tmp/buildd/nova-2014.1.dev990.g2c795f0/nova/api/validation/__init__.py", line 
36, in schema
  schema_validator = _SchemaValidator(request_body_schema)
File 
"/tmp/buildd/nova-2014.1.dev990.g2c795f0/nova/api/validation/validators.py", 
line 51, in __init__
  validator_cls = jsonschema.validators.extend(self.validator_org,
  AttributeError: 'dict' object has no attribute 'extend'

  
  ==
  FAIL: 
nova.tests.test_api_validation.AdditionalPropertiesEnableTestCase.test_validate_additionalProperties_enable
  --
  _StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File 
"/tmp/buildd/nova-2014.1.dev990.g2c795f0/nova/tests/test_api_validation.py", 
line 106, in setUp
  @validation.schema(request_body_schema=schema)
File 
"/tmp/buildd/nova-2014.1.dev990.g2c795f0/nova/api/validation/__init__.py", line 
36, in schema
  schema_validator = _SchemaValidator(request_body_schema)
File 
"/tmp/buildd/nova-2014.1.dev990.g2c795f0/nova/api/validation/validators.py", 
line 51, in __init__
  validator_cls = jsonschema.validators.extend(self.validator_org,
  AttributeError: 'dict' object has no attribute 'extend'

  
  ==
  FAIL: 
nova.tests.test_api_validation.IntegerRangeTestCase.test_validate_integer_range
  --
  _StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File 
"/tmp/buildd/nova-2014.1.dev990.g2c795f0/nova/tests/test_api_validation.py", 
line 302, in setUp
  @validation.schema(request_body_schema=schema)
File 
"/tmp/buildd/nova-2014.1.dev990.g2c795f0/nova/api/validation/__init__.py", line 
36, in schema
  schema_validator = _SchemaValidator(request_body_schema)
File 
"/tmp/buildd/nova-2014.1.dev990.g2c795f0/nova/api/validation/validators.py", 
line 51, in __init__
  validator_cls = jsonschema.validators.extend(self.validator_org,
  AttributeError: 'dict' object has no attribute 'extend'

  
  ==
  FAIL: nova.tests.test_api_validation.IntegerTestCase.test_validate_integer
  --
  _StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File 
"/tmp/buildd/nova-2014.1.dev990.g2c795f0/nova/tests/test_api_validation.py", 
line 245, in setUp
  @validation.schema(request_body_schema=schema)
File 
"/tmp/buildd/nova-2014.1.dev990.g2c795f0/nova/api/validation/__init__.py", line 
36, in schema
  schema_validator = _SchemaValidator(request_body_schema)
File 
"/tmp/buildd/nova-2014.1.dev990.g2c795f0/nova/api/validation/validators.py", 
line 51, in __init__
  validator_cls = jsonschema.validators.extend(self.validator_org,
  AttributeError: 'dict' object has no attribute 'extend'

  
  ==
  FAIL: 
nova.tests.test_api_validation.StringLengthTestCase.test_validate_string_length
  --
  _StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File 
"/tmp/buildd/nova-2014.1.dev990.g2c795f0/nova/tests/test_api_validation.py", 
line 207, in setUp
  @validation.schema(request_body_schema=sche

[Yahoo-eng-team] [Bug 1276214] [NEW] Live migration failure in API doesn't revert task_state to None

2014-02-04 Thread Loganathan Parthipan
Public bug reported:

If the API times out on an RPC during the processing of a
migrate_server call, it does not revert the task_state back to None
before or after sending the error response to the user. This can
prevent further API operations on the VM and leave a good VM in an
inoperable state, with the possible exception of delete.
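
A sketch of the kind of cleanup this calls for (the exception type and
call names are assumptions about the code path, not nova's actual fix):

from oslo_messaging import MessagingTimeout

def _live_migrate(self, context, instance, *args, **kwargs):
    try:
        self.compute_task_api.live_migrate_instance(
            context, instance, *args, **kwargs)
    except MessagingTimeout:
        # Revert so the VM is not left stuck in task_state 'migrating'
        # after the error is reported to the user.
        instance.task_state = None
        instance.save()
        raise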

This is one possible reproducer. I'm not sure it is always true, and
I'd appreciate it if someone else could confirm it.

1. Somehow make RPC requests hang
2. Issue a live migration request
3. The call should return an HTTP error (perhaps 409)
4. Check the VM. It should be in a good state, but task_state is stuck
in 'migrating'

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276214

Title:
  Live migration failure in API doesn't revert task_state to None

Status in OpenStack Compute (Nova):
  New

Bug description:
  If the API times out on an RPC during the processing of a
  migrate_server call, it does not revert the task_state back to None
  before or after sending the error response to the user. This can
  prevent further API operations on the VM and leave a good VM in an
  inoperable state, with the possible exception of delete.

  This is one possible reproducer. I'm not sure it is always true, and
  I'd appreciate it if someone else could confirm it.

  1. Somehow make RPC requests hang
  2. Issue a live migration request
  3. The call should return an HTTP error (perhaps 409)
  4. Check the VM. It should be in a good state, but task_state is
  stuck in 'migrating'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1276214/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276154] [NEW] suspending a paused instance

2014-02-04 Thread Loganathan Parthipan
Public bug reported:

Is there a compelling reason why we don't support suspending a paused
instance? At the moment we only allow 'active' and 'rescued' states to
be suspended.

In compute/api:

@check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.RESCUED])

Trying to suspend a paused instance results in:

nova suspend b10f8175-1663-41b2-8533-0b0606f369ff
ERROR: Cannot 'suspend' while instance is in vm_state paused (HTTP 409) 
(Request-ID: req-7349d554-ff48-4155-a62b-967f0813c59c)

I haven't tested with other hypervisors, but as far as libvirt/kvm is
concerned it suspends (virsh managedsave) both states ('running',
'paused') and resumes (virsh start) a suspended instance to the pre-
suspended state.
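
A quick demonstration with libvirt-python (a sketch; the domain name is
a placeholder):

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')  # placeholder domain
dom.suspend()        # pause the guest (virsh suspend)
dom.managedSave(0)   # managed save works from the paused state too
dom.create()         # virsh start: restores the pre-saved state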

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  Is there a particular reason why we don't support suspending a paused
  instance? At the moment we only allow 'active' and 'rescued' states to
  be suspended.
  
  In compute/api:
  
  @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.RESCUED])
  
  Trying to suspend a paused instance results in:
  
  nova suspend b10f8175-1663-41b2-8533-0b0606f369ff
  ERROR: Cannot 'suspend' while instance is in vm_state paused (HTTP 409) 
(Request-ID: req-7349d554-ff48-4155-a62b-967f0813c59c)
  
  I haven't tested with other hypervisors, but as far as libvirt/kvm is
- concerned it resumes a suspended instance to the pre-suspended state.
+ concerned it suspends (virsh managedsave) both states ('running',
+ 'paused') and resumes (virsh start) a suspended instance to the pre-
+ suspended state.

** Description changed:

- Is there a particular reason why we don't support suspending a paused
+ Is there a compelling reason why we don't support suspending a paused
  instance? At the moment we only allow 'active' and 'rescued' states to
  be suspended.
  
  In compute/api:
  
  @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.RESCUED])
  
  Trying to suspend a paused instance results in:
  
  nova suspend b10f8175-1663-41b2-8533-0b0606f369ff
  ERROR: Cannot 'suspend' while instance is in vm_state paused (HTTP 409) 
(Request-ID: req-7349d554-ff48-4155-a62b-967f0813c59c)
  
  I haven't tested with other hypervisors, but as far as libvirt/kvm is
  concerned it suspends (virsh managedsave) both states ('running',
  'paused') and resumes (virsh start) a suspended instance to the pre-
  suspended state.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276154

Title:
  suspending a paused instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  Is there a compelling reason why we don't support suspending a paused
  instance? At the moment we only allow 'active' and 'rescued' states to
  be suspended.

  In compute/api:

  @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.RESCUED])

  Trying to suspend a paused instance results in:

  nova suspend b10f8175-1663-41b2-8533-0b0606f369ff
  ERROR: Cannot 'suspend' while instance is in vm_state paused (HTTP 409) 
(Request-ID: req-7349d554-ff48-4155-a62b-967f0813c59c)

  I haven't tested with other hypervisors, but as far as libvirt/kvm is
  concerned it suspends (virsh managedsave) both states ('running',
  'paused') and resumes (virsh start) a suspended instance to the pre-
  suspended state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1276154/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274137] [NEW] Libvirt driver suspend returns before completion of the task

2014-01-29 Thread Loganathan Parthipan
Public bug reported:

The suspend() function in the libvirt driver issues a managedsave()
call and returns immediately. This results in the compute manager
setting task_state from 'suspending' back to None.

This is not good: a host may be set to reboot on the assumption that
all VMs are in stable states, or other operations on the VM itself may
get through because task_state is None.

It would be better if the driver's suspend() actually waited for the
power_state to move from PAUSED to SHUTOFF and only then returned
control to the manager. This would ensure that no inconsistent
task_states are reported.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274137

Title:
  Libvirt driver suspend returns before completion of the task

Status in OpenStack Compute (Nova):
  New

Bug description:
  The suspend() function in the libvirt driver issues a managedsave()
  call and returns immediately. This results in the compute manager
  setting task_state from 'suspending' back to None.

  This is not good: a host may be set to reboot on the assumption that
  all VMs are in stable states, or other operations on the VM itself
  may get through because task_state is None.

  It would be better if the driver's suspend() actually waited for the
  power_state to move from PAUSED to SHUTOFF and only then returned
  control to the manager. This would ensure that no inconsistent
  task_states are reported.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274137/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270825] [NEW] Live block migration fails for instances whose glance images have been deleted

2014-01-20 Thread Loganathan Parthipan
Public bug reported:

Once the glance image from which an instance was spawned is deleted,
it's no longer possible to block-migrate that instance.

To recreate:

1. Boot an instance off a public image or snapshot
2. Delete the image from glance
3. Live block migrate this instance. It will fail at the
pre-live-migration stage because the image cannot be downloaded.
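
A conceivable pre-flight check (a sketch with assumed helper names, not
nova's actual behaviour): verify the base image still exists before
starting the migration, so the failure is reported up front rather than
at the pre-live-migration stage.

from nova import exception

def check_base_image_exists(image_api, context, instance):
    # image_api is assumed to expose a glance-backed get(); see
    # nova.image for the real interface.
    try:
        image_api.get(context, instance['image_ref'])
    except exception.ImageNotFound:
        raise exception.MigrationPreCheckError(
            reason='base image %s no longer exists in glance'
                   % instance['image_ref'])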

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270825

Title:
  Live block migration fails for instances whose glance images have been
  deleted

Status in OpenStack Compute (Nova):
  New

Bug description:
  Once the glance image from which an instance was spawned is deleted,
  it's no longer possible to block-migrate that instance.

  To recreate:

  1. Boot an instance off a public image or snapshot
  2. Delete the image from glance
  3. Live block migrate this instance. It will fail at the
  pre-live-migration stage because the image cannot be downloaded.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260301] [NEW] Expired vnc token doesn't fail fast

2013-12-12 Thread Loganathan Parthipan
Public bug reported:

If an expired vnc token is used, I'd expect an appropriate HTTP
response. However, currently novnc just hangs at "Starting VNC
handshake".

To reproduce

1. nova get-vnc-console  novnc => returns a URI with the token
2. point browser to the returned URI after the expiry
3. page hangs at "Starting VNC handshake"

Expected response

An appropriate HTTP response
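
A sketch of the fail-fast behaviour expected here, in a
websockify-style handler (the method and attribute names are
assumptions):

import urlparse  # Python 2, as nova used at the time

def new_websocket_client(self):
    # Validate the token up front and abort with an error, instead of
    # letting the client hang at the VNC handshake.
    query = urlparse.urlparse(self.path).query
    token = urlparse.parse_qs(query).get('token', [None])[0]
    connect_info = self.consoleauth_rpcapi.check_token(
        self.context, token=token)
    if not connect_info:
        raise Exception('Invalid or expired console token: %s' % token)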

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- Expired vnc token doesn't result in an HTTP 401
+ Expired vnc token doesn't fail fast

** Description changed:

- If an expired vnc token is used, I'd expect an HTTP 401. However,
- currently novnc just hangs at "Starting VNC handshake".
+ If an expired vnc token is used, I'd expect an appropriate HTTP
+ response. However, currently novnc just hangs at "Starting VNC
+ handshake".
  
  To reproduce
  
  1. nova get-vnc-console  novnc => returns a URI with the token
  2. point browser to the returned URI after the expiry
  3. page hangs at "Starting VNC handshake"
  
  Expected response
  
- HTTP 401
+ An appropriate HTTP response

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260301

Title:
  Expired vnc token doesn't fail fast

Status in OpenStack Compute (Nova):
  New

Bug description:
  If an expired vnc token is used, I'd expect an appropriate HTTP
  response. However, currently novnc just hangs at "Starting VNC
  handshake".

  To reproduce

  1. nova get-vnc-console  novnc => returns a URI with the token
  2. point browser to the returned URI after the expiry
  3. page hangs at "Starting VNC handshake"

  Expected response

  An appropriate HTTP response

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260301/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp