All,
As most of you know, the upgrade from 4.0 to 4.1 changed the interface naming
schema. When a host in a cluster is rebooted, its interface names change, and
once that happens, live migration to that host breaks.
Example config:
All management servers and hosts are running CS 4.1.1
Hypervisor: KVM on RHEL 6.3
Host 1 still has the older 4.0 interface naming schema
Host 2 was rebooted and now has the newer interface naming schema
Live migration looks for the older schema's interface name (i.e.
cloudVirBr<vlan>) when attempting a migration from Host 1 to Host 2.
Here's a sample log:
2013-08-05 16:45:21,846 DEBUG [agent.transport.Request]
(Job-Executor-34:job-1660) Seq 19-1921285594: Sending { Cmd , MgmtId:
159090354809909, via: 19, Ver: v1, Flags: 100111,
[{"MigrateCommand":{"vmName":"i-44-255-VM","destIp":"<hostip>","hostGuid":"91e9b633-f46b-31f3-9a4b-92285fd94b62-LibvirtComputingResource","isWindows":false,"wait":0}}]
}
2013-08-05 16:45:21,926 DEBUG [agent.transport.Request] (StatsCollector-1:null)
Seq 1-1768126050: Received: { Ans: , MgmtId: 159090354809909, via: 1, Ver: v1,
Flags: 10, { GetVmStatsAnswer } }
2013-08-05 16:45:21,963 DEBUG [agent.manager.AgentManagerImpl]
(AgentManager-Handler-7:null) Ping from 5
2013-08-05 16:45:22,012 DEBUG [agent.transport.Request]
(AgentManager-Handler-9:null) Seq 19-1921285594: Processing: { Ans: , MgmtId:
159090354809909, via: 19, Ver: v1, Flags: 110,
[{"MigrateAnswer":{"result":false,"details":"Cannot get interface MTU on
'cloudVirBr18': No such device","wait":0}}] }
2013-08-05 16:45:22,012 DEBUG [agent.manager.AgentAttache]
(AgentManager-Handler-9:null) Seq 19-1921285594: No more commands found
2013-08-05 16:45:22,012 DEBUG [agent.transport.Request]
(Job-Executor-34:job-1660) Seq 19-1921285594: Received: { Ans: , MgmtId:
159090354809909, via: 19, Ver: v1, Flags: 110, { MigrateAnswer } }
2013-08-05 16:45:22,012 ERROR [cloud.vm.VirtualMachineManagerImpl]
(Job-Executor-34:job-1660) Unable to migrate due to Cannot get interface MTU on
'cloudVirBr18': No such device
2013-08-05 16:45:22,013 INFO [cloud.vm.VirtualMachineManagerImpl]
(Job-Executor-34:job-1660) Migration was unsuccessful. Cleaning up:
VM[User|app01-dev]
2013-08-05 16:45:22,018
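For what it's worth, here's roughly how I've been confirming the mismatch by
hand on each host. This is only a sketch: the bridge name comes from the error
above, the rest is illustrative.

```shell
#!/bin/sh
# Hypothetical diagnostic sketch. First, list the bridges each host
# actually has (run on both Host 1 and Host 2):
#   brctl show
#   ip link show
#
# The migration fails because the source VM's definition references a
# 4.0-style bridge that no longer exists on the rebooted destination.
# Pulling the VLAN out of the old name tells you which newer-style
# bridge on the destination should be carrying that traffic.
old_bridge="cloudVirBr18"           # bridge name from the error message
vlan="${old_bridge#cloudVirBr}"     # strip the 4.0-style prefix -> "18"
echo "VM expects VLAN ${vlan} on the destination host"
```

Comparing that VLAN against the bridge list on Host 2 shows the rename is the
only difference; the VLAN itself is still trunked to the host.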
Is there currently a way to change the destination network name that CS
Management uses, so that a complete VM shutdown and restart isn't required to
re-enable live migration between hosts?
Any ideas would be appreciated.
- Si