Dear all,
I don't know whether anybody else has experienced problems when executing KVM live
migrations with Tashi, but I have been seeing errors related to VM states. I
think the problem lies in the prepReceiveVm method in
nodemanagerservice.py, which is called by the CM.
The prepReceiveVm method sets the instance state to MigratePrep
("instance.state = InstanceState.MigratePrep"), which does not make sense,
because the state of this instance was already set to MigratePrep in the
migrateVm method of clustermanagerservice.py (see "self.stateTransition(instance,
InstanceState.Running, InstanceState.MigratePrep)"). As a result the source VM
state is never updated, which causes an exception to be thrown every time inside
the stateTransition method.
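To illustrate the failure mode, here is a minimal, self-contained sketch (the class and function names mimic Tashi but are simplified stand-ins, not the real implementation): stateTransition refuses to move an instance out of a state it is not actually in, so attempting the Running -> MigratePrep transition a second time raises.

```python
class InstanceState:
    Running = 1
    MigratePrep = 2

class TashiException(Exception):
    pass

class Instance:
    def __init__(self):
        self.state = InstanceState.Running

def stateTransition(instance, old, cur):
    # Simplified version of the CM's guard: the expected old state
    # must match the instance's actual state.
    if old is not None and instance.state != old:
        raise TashiException("expected state %s, found %s" % (old, instance.state))
    instance.state = cur

instance = Instance()
# CM's migrateVm: Running -> MigratePrep succeeds once ...
stateTransition(instance, InstanceState.Running, InstanceState.MigratePrep)
# ... but because the source VM state is never advanced past this point,
# the same transition is effectively attempted again and fails:
try:
    stateTransition(instance, InstanceState.Running, InstanceState.MigratePrep)
except TashiException as e:
    print("stateTransition failed: %s" % e)
```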
If this makes sense to you, a possible solution would be:
1) change prepReceiveVm inside nodemanagerservice.py as follows (i.e. remove the
line that sets the instance state):

def prepReceiveVm(self, instance, source):
    instance.vmId = -1
    transportCookie = self.vmm.prepReceiveVm(instance, source.name)
    return transportCookie
2) add a prepSourceVm method to nodemanagerservice.py and register it in the
nodeManagerRPCs list in rpycservices.py:

def prepSourceVm(self, vmId):
    instance = self.getInstance(vmId)
    instance.state = InstanceState.MigratePrep
3) add the first three of the following lines before the prepReceiveVm call in
migrateVm in clustermanagerservice.py (note that prepSourceVm runs on the source
host, so the log message should say so):

# Set the source instance state
self.log.info("migrateVm: prepSourceVm will be executed on the source host")
self.proxy[sourceHost.name].prepSourceVm(instance.vmId)
# Prepare the target
self.log.info("migrateVm: prepReceiveVm will be executed on the target host")
cookie = self.proxy[targetHost.name].prepReceiveVm(instance, sourceHost)
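As a sanity check on the ordering, here is a small self-contained simulation of the proposed flow (FakeNodeManager and the instance bookkeeping are stand-ins for the real NM service and RPC proxies, not Tashi code): the source NM marks its instance MigratePrep before the target NM is asked to prepare.

```python
class InstanceState:
    Running = 1
    MigratePrep = 2

class Instance:
    def __init__(self, vmId):
        self.vmId = vmId
        self.state = InstanceState.Running

class FakeNodeManager:
    """Stand-in for nodemanagerservice; records the call order."""
    def __init__(self, instances, calls):
        self.instances = instances
        self.calls = calls
    def prepSourceVm(self, vmId):
        instance = self.instances[vmId]
        instance.state = InstanceState.MigratePrep
        self.calls.append("prepSourceVm")
    def prepReceiveVm(self, instance, sourceName):
        instance.vmId = -1
        self.calls.append("prepReceiveVm")
        return "transport-cookie"

calls = []
src_instance = Instance(vmId=7)
source_nm = FakeNodeManager({7: src_instance}, calls)
target_instance = Instance(vmId=7)
target_nm = FakeNodeManager({}, calls)

# CM side of migrateVm: set the source state first, then prepare the target.
source_nm.prepSourceVm(7)
cookie = target_nm.prepReceiveVm(target_instance, "source-host")

print(calls)  # ['prepSourceVm', 'prepReceiveVm']
```

With this ordering the source instance is already in MigratePrep by the time the target is contacted, so the later stateTransition calls see the state they expect.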
Best regards,
Miha