[Yahoo-eng-team] [Bug 1562266] Re: VMware: resize instance changes instance's hypervisor_hostname

2016-03-30 Thread Luo Gangyi
I met the same problem.

And I believe it is definitely a bug!

Nova's behavior is inconsistent with what the VMware driver actually
does, and it will cause all subsequent operations to fail!

If the VMware driver does not support cross-cluster resize, we should
force the nova-scheduler to choose the same cluster (see the filter
sketch below).
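
A minimal sketch of what such a constraint could look like as a custom
scheduler filter. This is an illustration only, not the actual nova fix;
the filter name is hypothetical and it assumes the older
filter_properties-dict filter interface, which varies between nova
releases:

    from nova.scheduler import filters


    class SameClusterResizeFilter(filters.BaseHostFilter):
        """Only pass the host (VMware cluster) the instance is
        already running on, so the driver never sees a
        cross-cluster resize."""

        def host_passes(self, host_state, filter_properties):
            request_spec = filter_properties.get('request_spec', {})
            instance = request_spec.get('instance_properties', {})
            source_host = instance.get('host')
            if not source_host:
                # Initial boot rather than a resize of an existing
                # instance: do not restrict placement.
                return True
            return host_state.host == source_host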

** Changed in: nova
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1562266

Title:
  VMware: resize instance changes instance's hypervisor_hostname

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Nova version: master
  Virt driver: VCDriver

  Nova compute1   <-->   VMware cluster1
  Nova compute2   <-->   VMware cluster2

  Resizing an instance whose original hypervisor_hostname is
  domain-c9 (cluster2) changes the instance's hypervisor_hostname:
  before we even confirmed the resize, hypervisor_hostname had changed
  to domain-c7 (cluster1).

  Then we checked in vCenter: the instance was still located in
  cluster2. The attribute can be compared before and after the resize
  with a check like the one below.
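
  A minimal reproduction check, assuming python-novaclient and
  placeholder credentials/UUID (replace with real values):

    from novaclient import client

    # Placeholder endpoint and credentials, for illustration only.
    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://controller:5000/v2.0')
    server = nova.servers.get('INSTANCE_UUID')
    # Admin-only extended attribute exposed by the compute API.
    print(getattr(server, 'OS-EXT-SRV-ATTR:hypervisor_hostname'))
    # Observed: domain-c9 before the resize, domain-c7 afterwards,
    # even though vCenter still shows the VM in cluster2.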

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1562266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563325] [NEW] KeyError: 'summary.guest.toolsStatus' in vmware driver

2016-03-29 Thread Luo Gangyi
Public bug reported:

When I do a hard reboot of a VMware instance that does not have VMware
Tools installed, I get the following error (a defensive-lookup sketch
follows the traceback below):

2016-03-29 21:11:02.463 8893 ERROR oslo_messaging.rpc.dispatcher [req-181921d8-7253-4bfc-b613-6fbae41824b4 4412e38ec9814b96a03e63097ec51f1a 8f75187cd29f4715881f450646fc6e08 - - -] Exception during message handling: 'summary.guest.toolsStatus'
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6751, in reboot_instance
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     reboot_type)
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     payload)
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 327, in decorated_function
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     LOG.warning(msg, e, instance_uuid=instance_uuid)
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 298, in decorated_function
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 377, in decorated_function
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 355, in decorated_function
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 343, in decorated_function
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3211, in reboot_instance
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     self._set_instance_obj_error_state(context, instance)
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3192, in reboot_instance
2016-03-29 21:11:02.463 8893 TRACE oslo_messaging.rpc.dispatcher     bad
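
The KeyError suggests the driver indexes the vCenter property set
directly, while vCenter can omit summary.guest.toolsStatus when VMware
Tools is not installed. A minimal sketch of the defensive pattern
(illustration only, not the actual nova code; the helper names are
made up):

    # props stands in for the property dict returned by vCenter; the
    # toolsStatus key is missing because VMware Tools is absent.
    props = {'summary.guest.powerState': 'poweredOn'}

    def soft_reboot():
        print('RebootGuest (clean in-guest reboot)')

    def hard_reboot():
        print('ResetVM_Task (hard reset)')

    # dict.get() returns None instead of raising KeyError.
    tools_status = props.get('summary.guest.toolsStatus')
    if tools_status == 'toolsOk':
        soft_reboot()
    else:
        # Tools absent or not running: fall back to a hard reset.
        hard_reboot()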

[Yahoo-eng-team] [Bug 1460536] [NEW] nova rescue does not actually work

2015-06-01 Thread Luo Gangyi
Public bug reported:

nova rescue does not actually work in a lot of situations.

Although nova rescue generates the right libvirt.xml (at least in my
opinion), the virtual machine OS does not use the rescue disk to boot.
It still boots from the original disk (I tested this on Icehouse, Juno,
and Kilo).

I am not sure whether this is a bug in libvirt/qemu or a
misconfiguration of the OS inside the VM.

How to reproduce:

1. Download an image (for example,
http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2)
and upload it to Glance.

2. Create an instance from the above image.

3. Touch a file in the instance.

4. nova rescue [instance-id]

You can see the file you touched is still there, which indicates the OS
of the VM still boots from the original disk.

If you run df -h, you will find the OS is using /dev/vdb1 as the root
file system.

===
I think the possible reason is that /etc/fstab uses the filesystem UUID
as the block device name, and all instances created from one image share
the same UUID, which confuses the OS when it has two disks with the same
UUID.

If I use /dev/vda1 instead of the UUID, it seems to work correctly.
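
A quick diagnostic sketch for this hypothesis, run inside the rescued
guest (assumes the blkid utility is available; illustration only):

    import subprocess
    from collections import Counter

    # List the filesystem UUID of every block device the guest sees.
    out = subprocess.check_output(['blkid', '-s', 'UUID', '-o', 'value'])
    uuids = out.decode().split()

    # Two disks reporting the same UUID make a root=UUID=... lookup
    # ambiguous, which matches the behaviour described above.
    dupes = [u for u, n in Counter(uuids).items() if n > 1]
    if dupes:
        print('Duplicate filesystem UUIDs found:', dupes)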

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460536

Title:
  nova rescue does not actually work

Status in OpenStack Compute (Nova):
  New

Bug description:
  nova rescue does not actually work in a lot of situations.

  Although nova rescue generates the right libvirt.xml (at least in my
  opinion), the virtual machine OS does not use the rescue disk to
  boot. It still boots from the original disk (I tested this on
  Icehouse, Juno, and Kilo).

  I am not sure whether this is a bug in libvirt/qemu or a
  misconfiguration of the OS inside the VM.

  How to reproduce:

  1. Download an image (for example,
  http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2)
  and upload it to Glance.

  2. Create an instance from the above image.

  3. Touch a file in the instance.

  4. nova rescue [instance-id]

  You can see the file you touched is still there, which indicates the
  OS of the VM still boots from the original disk.

  If you run df -h, you will find the OS is using /dev/vdb1 as the root
  file system.

  ===
  I think the possible reason is that /etc/fstab uses the filesystem
  UUID as the block device name, and all instances created from one
  image share the same UUID, which confuses the OS when it has two
  disks with the same UUID.

  If I use /dev/vda1 instead of the UUID, it seems to work correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1460536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp