That means the image for the VR is not in secondary storage.
Why don't you try re-registering the system VM template?
Delete the current one and register it with the same name as in the global
configuration.
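
For example, something like this to check (the mount point here is just an
example):

    # mount secondary storage and look for the routing template
    mount -t nfs <secondary-storage-host>:/export/secondary /mnt/secondary
    # each template directory should contain the .vhd plus template.properties;
    # the template name must match the one set in the global configuration
    ls -lh /mnt/secondary/template/tmpl/*/*/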

James.

On Wednesday, July 15, 2015, Sonali Jadhav <son...@servercentralen.se>
wrote:

> It's a VHD under the template folder of the secondary storage mount.
>
>
> ---- giljae o wrote ----
>
> What is this?
>
>  "f71666cc-2510-43f7-8748-6c693a4a0716")']
>
> On Tuesday, July 14, 2015, Sonali Jadhav <son...@servercentralen.se> wrote:
>
> > Aha, this could be the problem. I found this in the pool master SMlog:
> >
> >
> > Jul 14 13:53:32 SolXS01 SM: [12043] missing config for vdi: f71666cc-2510-43f7-8748-6c693a4a0716
> > Jul 14 13:53:32 SolXS01 SM: [12043] new VDIs on disk: set(['f71666cc-2510-43f7-8748-6c693a4a0716'])
> > Jul 14 13:53:32 SolXS01 SM: [12043] Introducing VDI with location=f71666cc-2510-43f7-8748-6c693a4a0716
> > Jul 14 13:53:32 SolXS01 SM: [12049] lock: opening lock file /var/lock/sm/e7d676cf-79ab-484a-8722-73d509b4c222/sr
> > Jul 14 13:53:32 SolXS01 SM: [12043] lock: released /var/lock/sm/e7d676cf-79ab-484a-8722-73d509b4c222/sr
> > Jul 14 13:53:32 SolXS01 SM: [12043] ***** sr_scan: EXCEPTION XenAPI.Failure, ['INTERNAL_ERROR', 'Db_exn.Uniqueness_constraint_violation("VDI", "uuid", "f71666cc-2510-43f7-8748-6c693a4a0716")']
> > Jul 14 13:53:32 SolXS01 SM: [12043]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
> > Jul 14 13:53:32 SolXS01 SM: [12043]     return self._run_locked(sr)
> > Jul 14 13:53:32 SolXS01 SM: [12043]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
> > Jul 14 13:53:32 SolXS01 SM: [12043]     rv = self._run(sr, target)
> > Jul 14 13:53:32 SolXS01 SM: [12043]   File "/opt/xensource/sm/SRCommand.py", line 331, in _run
> > Jul 14 13:53:32 SolXS01 SM: [12043]     return sr.scan(self.params['sr_uuid'])
> > Jul 14 13:53:32 SolXS01 SM: [12043]   File "/opt/xensource/sm/FileSR", line 206, in scan
> > Jul 14 13:53:32 SolXS01 SM: [12043]     return super(FileSR, self).scan(sr_uuid)
> > Jul 14 13:53:32 SolXS01 SM: [12043]   File "/opt/xensource/sm/SR.py", line 317, in scan
> > Jul 14 13:53:32 SolXS01 SM: [12043]     scanrecord.synchronise()
> > Jul 14 13:53:32 SolXS01 SM: [12043]   File "/opt/xensource/sm/SR.py", line 580, in synchronise
> > Jul 14 13:53:32 SolXS01 SM: [12043]     self.synchronise_new()
> > Jul 14 13:53:32 SolXS01 SM: [12043]   File "/opt/xensource/sm/SR.py", line 553, in synchronise_new
> > Jul 14 13:53:32 SolXS01 SM: [12043]     vdi._db_introduce()
> > Jul 14 13:53:32 SolXS01 SM: [12043]   File "/opt/xensource/sm/VDI.py", line 302, in _db_introduce
> > Jul 14 13:53:32 SolXS01 SM: [12043]     vdi = self.sr.session.xenapi.VDI.db_introduce(uuid, self.label, self.description, self.sr.sr_ref, ty, self.shareable, self.read_only, {}, self.location, {}, sm_config, self.managed, str(self.size), str(self.utilisation), metadata_of_pool, is_a_snapshot, xmlrpclib.DateTime(snapshot_time), snapshot_of)
> > Jul 14 13:53:32 SolXS01 SM: [12043]   File "/usr/lib/python2.4/site-packages/XenAPI.py", line 245, in __call__
> > Jul 14 13:53:32 SolXS01 SM: [12043]     return self.__send(self.__name, args)
> > Jul 14 13:53:32 SolXS01 SM: [12043]   File "/usr/lib/python2.4/site-packages/XenAPI.py", line 149, in xenapi_request
> > Jul 14 13:53:32 SolXS01 SM: [12043]     result = _parse_result(getattr(self, methodname)(*full_params))
> > Jul 14 13:53:32 SolXS01 SM: [12043]   File "/usr/lib/python2.4/site-packages/XenAPI.py", line 219, in _parse_result
> > Jul 14 13:53:32 SolXS01 SM: [12043]     raise Failure(result['ErrorDescription'])
> > Jul 14 13:53:32 SolXS01 SM: [12043]
> > Jul 14 13:53:32 SolXS01 SMGC: [12049] Found 0 cache files
> > Jul 14 13:53:32 SolXS01 SM: [12049] lock: tried lock /var/lock/sm/e7d676cf-79ab-484a-8722-73d509b4c222/sr, acquired: True (exists: True)
> > Jul 14 13:53:32 SolXS01 SM: [12049] ['/usr/bin/vhd-util', 'scan', '-f', '-c', '-m', '/var/run/sr-mount/e7d676cf-79ab-484a-8722-73d509b4c222/*.vhd']
> > Jul 14 13:53:32 SolXS01 SM: [12043] Raising exception [40, The SR scan failed  [opterr=['INTERNAL_ERROR', 'Db_exn.Uniqueness_constraint_violation("VDI", "uuid", "f71666cc-2510-43f7-8748-6c693a4a0716")']]]
> >
> >
> >
> > [root@SolXS01 ~]# ls
> > /var/run/sr-mount/e7d676cf-79ab-484a-8722-73d509b4c222/
> > f71666cc-2510-43f7-8748-6c693a4a0716.vhd
> > [root@SolXS01 ~]#
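> >
> > Would something like this be a safe way to clean up the duplicate record? As
> > far as I understand, vdi-forget only removes the xapi record, not the .vhd
> > itself:
> >
> >     # inspect the record that clashes with the VHD on disk
> >     xe vdi-list uuid=f71666cc-2510-43f7-8748-6c693a4a0716 params=all
> >     # if it is stale, drop only the database record and rescan the SR
> >     xe vdi-forget uuid=f71666cc-2510-43f7-8748-6c693a4a0716
> >     xe sr-scan uuid=e7d676cf-79ab-484a-8722-73d509b4c222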
> >
> >
> > /Sonali
> >
> > -----Original Message-----
> > From: giljae o [mailto:ogil...@gmail.com]
> > Sent: Tuesday, July 14, 2015 4:40 PM
> > To: users@cloudstack.apache.org
> > Subject: Re: Urgent: VMs not migrated after putting Xenserver host in
> > maintenance mode
> >
> > Hi
> >
> > The SM log is on the XenServer; from it you can see which mount point is in use.
> >
> > The SM log is at /var/log/SMlog.
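> >
> > For example, to watch it while you retry the scan:
> >
> >     tail -f /var/log/SMlog | grep -i f71666cc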
> >
> > James
> >
> >
> > On Tuesday, July 14, 2015, Sonali Jadhav <son...@servercentralen.se> wrote:
> >
> > > Any clue on this?
> > >
> > > I can see that the problem occurs while creating the new VR.
> > >
> > > Catch Exception: class com.xensource.xenapi.Types$UuidInvalid due to The uuid you supplied was invalid.
> > > The uuid you supplied was invalid.
> > >
> > > I don't understand exactly which uuid is invalid; I need help tracing the
> > > issue.
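> > >
> > > The only lead I see in the trace below is that it fails in createPatchVbd
> > > -> VDI.getByUuid, so my guess is that it is the systemvm patch disk on the
> > > host. Would this be the right way to check it (the name-label is my
> > > guess)?
> > >
> > >     xe vdi-list name-label=systemvm.iso params=uuid,sr-uuid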
> > >
> > > /Sonali
> > >
> > > -----Original Message-----
> > > From: Sonali Jadhav [mailto:son...@servercentralen.se]
> > > Sent: Monday, July 13, 2015 1:42 PM
> > > To: users@cloudstack.apache.org
> > > Subject: RE: Urgent: VMs not migrated after putting Xenserver host in
> > > maintenance mode
> > >
> > > Hi,
> > >
> > > That helped. I migrated the VMs, and ACS synced them correctly as well.
> > > Now all my XenServers in the pool are on 6.5.
> > >
> > > But now I have a new problem: when I try to create a new VM with an
> > > isolated network, it gives me the following error. It looks like the
> > > problem occurs while creating the VR. I also observed that one host has 3
> > > SRs which are disconnected; I don't know why. It has been like that since
> > > I rebooted the server with the updated XS 6.5.
> > >
> > > 015-07-13 08:36:47,975 DEBUG
> [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> > > (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb)
> > Creating
> > > monitoring services on VM[DomainRouter|r-97-VM] start...
> > > 2015-07-13 08:36:47,982 DEBUG
> > [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> > > (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb)
> > > Reapplying dhcp entries as a part of domR VM[DomainRouter|r-97-VM]
> > start...
> > > 2015-07-13 08:36:47,984 DEBUG
> > [c.c.n.r.VirtualNetworkApplianceManagerImpl]
> > > (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb)
> > > Reapplying vm data (userData and metaData) entries as a part of domR
> > > VM[DomainRouter|r-97-VM] start...
> > > 2015-07-13 08:36:48,035 DEBUG [c.c.a.t.Request]
> > > (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Seq
> > > 4-5299892336484951126: Sending  { Cmd , MgmtId: 59778234354585, via:
> > > 4(SeSolXS02), Ver: v1, Flags: 100011,
> > >
> >
> [{"com.cloud.agent.api.StartCommand":{"vm":{"id":97,"name":"r-97-VM","bootloader":"PyGrub","type":"DomainRouter","cpus":1,"minSpeed":500,"maxSpeed":500,"minRam":268435456,"maxRam":268435456,"arch":"x86_64","os":"Debian
> > > GNU/Linux 7(64-bit)","platformEmulator":"Debian Wheezy 7.0
> > > (64-bit)","bootArgs":" template=domP name=r-97-VM eth2ip=100.65.36.119
> > > eth2mask=255.255.255.192 gateway=100.65.36.65 eth0ip=10.1.1.1
> > > eth0mask=255.255.255.0 domain=cs17cloud.internal cidrsize=24
> > > dhcprange=10.1.1.1 eth1ip=169.254.0.120 eth1mask=255.255.0.0
> type=router
> > > disable_rp_filter=true dns1=8.8.8.8
> > >
> >
> dns2=8.8.4.4","enableHA":true,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"0R3TO+O9g+kGxMdtFbt0rw==","params":{},"uuid":"80b6edf0-7301-4985-b2a6-fae64636c5e8","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"2fb465e2-f51f-4b46-8ec2-153fd843c6cf","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"876d490c-a1d4-3bfe-88b7-1bdb2479541b","id":1,"poolType":"NetworkFilesystem","host":"172.16.5.194","path":"/tank/primstore","port":2049,"url":"NetworkFilesystem://
> > >
> >
> 172.16.5.194/tank/primstore/?ROLE=Primary&STOREUUID=876d490c-a1d4-3bfe-88b7-1bdb2479541b
> > >
> >
> "}},"name":"ROOT-97","size":2684354560,"path":"b9b23a67-9bfe-485a-906c-dfe8282fe868","volumeId":133,"vmName":"r-97-VM","accountId":23,"format":"VHD","provisioningType":"THIN","id":133,"deviceId":0,"hypervisorType":"XenServer"}},"diskSeq":0,"path":"b9b23a67-9bfe-485a-906c-dfe8282fe868","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storageHost":"172.16.5.194","volumeSize":"2684354560"}}],"nics":[{"deviceId":2,"networkRateMbps":200,"defaultNic":true,"pxeDisable":true,"nicUuid":"f699a9b6-cc02-4e7e-805b-0005d69eadac","uuid":"1b5905ad-12b0-4594-be02-26aa753a640d","ip":"100.65.36.119","netmask":"255.255.255.192","gateway":"100.65.36.65","mac":"06:14:88:00:01:14","dns1":"8.8.8.8","dns2":"8.8.4.4","broadcastType":"Vlan","type":"Public","broadcastUri":"vlan://501","isolationUri":"vlan://501","isSecurityGroupEnabled":false,"name":"public"},{"deviceId":0,"networkRateMbps":200,"defaultNic":false,"pxeDisable":true,"nicUuid":"52f8e291-c671-4bfe-b37b-9a0af82f09fd","uuid":"2a9f3c45-cdcf-4f39-a97c-ac29f1c21888","ip":"10.1.1.1","netmask":"255.255.255.0","mac":"02:00:73:a2:00:02","dns1":"8.8.8.8","dns2":"8.8.4.4","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://714","isolationUri":"vlan://714","isSecurityGroupEnabled":false,"name":"guest"},{"deviceId":1,"networkRateMbps":-1,"defaultNic":false,"pxeDisable":true,"nicUuid":"b1b8c3d6-e1e6-4575-8371-5c9b3e3a0c66","uuid":"527ed501-3b46-4d98-8e0a-d8d299870f32","ip":"169.254.0.120","netmask":"255.255.0.0","gateway":"169.254.0.1","mac":"0e:00:a9:fe:00:78","broadcastType":"LinkLocal","type":"Control","isSecurityGroupEnabled":false}]},"hostIp":"172.16.5.198","executeInSequence":false,"wait":0}},{"com.cloud.agent.api.check.CheckSshCommand":{"ip":"169.254.0.120","port":3922,"interval":6,"retries":100,"name":"r-97-VM","wait":0}},{"com.cloud.agent.api.GetDomRVersionCmd":{"accessDetails":{"
> > > router.name
> > >
> >
> ":"r-97-VM","router.ip":"169.254.0.120"},"wait":0}},{},{"com.cloud.agent.api.routing.AggregationControlCommand":{"action":"Start","accessDetails":{"router.guest.ip":"10.1.1.1","
> > > router.name
> > >
> >
> ":"r-97-VM","router.ip":"169.254.0.120"},"wait":0}},{"com.cloud.agent.api.routing.IpAssocCommand":{"ipAddresses":[{"accountId":23,"publicIp":"100.65.36.119","sourceNat":true,"add":true,"oneToOneNat":false,"firstIP":true,"broadcastUri":"vlan://501","vlanGateway":"100.65.36.65","vlanNetmask":"255.255.255.192","vifMacAddress":"06:af:70:00:01:14","networkRate":200,"trafficType":"Public","networkName":"public","newNic":false}],"accessDetails":{"zone.network.type":"Advanced","
> > > router.name
> > >
> >
> ":"r-97-VM","router.ip":"169.254.0.120","router.guest.ip":"10.1.1.1"},"wait":0}},{"com.cloud.agent.api.routing.SetFirewallRulesCommand":{"rules":[{"id":0,"srcIp":"","protocol":"all","revoked":false,"alreadyAdded":false,"sourceCidrList":[],"purpose":"Firewall","trafficType":"Egress","defaultEgressPolicy":false}],"accessDetails":{"router.guest.ip":"10.1.1.1","firewall.egress.default":"System","zone.network.type":"Advanced","router.ip":"169.254.0.120","
> > > router.name
> > >
> >
> ":"r-97-VM"},"wait":0}},{"com.cloud.agent.api.routing.SetMonitorServiceCommand":{"services":[{"id":0,"service":"dhcp","processname":"dnsmasq","serviceName":"dnsmasq","servicePath":"/var/run/dnsmasq/dnsmasq.pid","pidFile":"/var/run/dnsmasq/dnsmasq.pid","isDefault":false},{"id":0,"service":"loadbalancing","processname":"haproxy","serviceName":"haproxy","servicePath":"/var/run/haproxy.pid","pidFile":"/var/run/haproxy.pid","isDefault":false},{"id":0,"service":"ssh","processname":"sshd","serviceName":"ssh","servicePath":"/var/run/sshd.pid","pidFile":"/var/run/sshd.pid","isDefault":true},{"id":0,"service":"webserver","processname":"apache2","serviceName":"apache2","servicePath":"/var/run/apache2.pid","pidFile":"/var/run/apache2.pid","isDefault":true}],"accessDetails":{"
> > > router.name
> > >
> >
> ":"r-97-VM","router.ip":"169.254.0.120","router.guest.ip":"10.1.1.1"},"wait":0}},{"com.cloud.agent.api.routing.AggregationControlCommand":{"action":"Finish","accessDetails":{"router.guest.ip":"10.1.1.1","
> > > router.name":"r-97-VM","router.ip":"169.254.0.120"},"wait":0}}] }
> > > 2015-07-13 08:36:48,036 DEBUG [c.c.a.t.Request]
> > > (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Seq
> > > 4-5299892336484951126: Executing:  { Cmd , MgmtId: 59778234354585, via:
> > > 4(SeSolXS02), Ver: v1, Flags: 100011,
> > >
> >
> [{"com.cloud.agent.api.StartCommand":{"vm":{"id":97,"name":"r-97-VM","bootloader":"PyGrub","type":"DomainRouter","cpus":1,"minSpeed":500,"maxSpeed":500,"minRam":268435456,"maxRam":268435456,"arch":"x86_64","os":"Debian
> > > GNU/Linux 7(64-bit)","platformEmulator":"Debian Wheezy 7.0
> > > (64-bit)","bootArgs":" template=domP name=r-97-VM eth2ip=100.65.36.119
> > > eth2mask=255.255.255.192 gateway=100.65.36.65 eth0ip=10.1.1.1
> > > eth0mask=255.255.255.0 domain=cs17cloud.internal cidrsize=24
> > > dhcprange=10.1.1.1 eth1ip=169.254.0.120 eth1mask=255.255.0.0
> type=router
> > > disable_rp_filter=true dns1=8.8.8.8
> > >
> >
> dns2=8.8.4.4","enableHA":true,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"0R3TO+O9g+kGxMdtFbt0rw==","params":{},"uuid":"80b6edf0-7301-4985-b2a6-fae64636c5e8","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"2fb465e2-f51f-4b46-8ec2-153fd843c6cf","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"876d490c-a1d4-3bfe-88b7-1bdb2479541b","id":1,"poolType":"NetworkFilesystem","host":"172.16.5.194","path":"/tank/primstore","port":2049,"url":"NetworkFilesystem://
> > >
> >
> 172.16.5.194/tank/primstore/?ROLE=Primary&STOREUUID=876d490c-a1d4-3bfe-88b7-1bdb2479541b
> > >
> >
> "}},"name":"ROOT-97","size":2684354560,"path":"b9b23a67-9bfe-485a-906c-dfe8282fe868","volumeId":133,"vmName":"r-97-VM","accountId":23,"format":"VHD","provisioningType":"THIN","id":133,"deviceId":0,"hypervisorType":"XenServer"}},"diskSeq":0,"path":"b9b23a67-9bfe-485a-906c-dfe8282fe868","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storageHost":"172.16.5.194","volumeSize":"2684354560"}}],"nics":[{"deviceId":2,"networkRateMbps":200,"defaultNic":true,"pxeDisable":true,"nicUuid":"f699a9b6-cc02-4e7e-805b-0005d69eadac","uuid":"1b5905ad-12b0-4594-be02-26aa753a640d","ip":"100.65.36.119","netmask":"255.255.255.192","gateway":"100.65.36.65","mac":"06:14:88:00:01:14","dns1":"8.8.8.8","dns2":"8.8.4.4","broadcastType":"Vlan","type":"Public","broadcastUri":"vlan://501","isolationUri":"vlan://501","isSecurityGroupEnabled":false,"name":"public"},{"deviceId":0,"networkRateMbps":200,"defaultNic":false,"pxeDisable":true,"nicUuid":"52f8e291-c671-4bfe-b37b-9a0af82f09fd","uuid":"2a9f3c45-cdcf-4f39-a97c-ac29f1c21888","ip":"10.1.1.1","netmask":"255.255.255.0","mac":"02:00:73:a2:00:02","dns1":"8.8.8.8","dns2":"8.8.4.4","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://714","isolationUri":"vlan://714","isSecurityGroupEnabled":false,"name":"guest"},{"deviceId":1,"networkRateMbps":-1,"defaultNic":false,"pxeDisable":true,"nicUuid":"b1b8c3d6-e1e6-4575-8371-5c9b3e3a0c66","uuid":"527ed501-3b46-4d98-8e0a-d8d299870f32","ip":"169.254.0.120","netmask":"255.255.0.0","gateway":"169.254.0.1","mac":"0e:00:a9:fe:00:78","broadcastType":"LinkLocal","type":"Control","isSecurityGroupEnabled":false}]},"hostIp":"172.16.5.198","executeInSequence":false,"wait":0}},{"com.cloud.agent.api.check.CheckSshCommand":{"ip":"169.254.0.120","port":3922,"interval":6,"retries":100,"name":"r-97-VM","wait":0}},{"com.cloud.agent.api.GetDomRVersionCmd":{"accessDetails":{"
> > > router.name
> > >
> >
> ":"r-97-VM","router.ip":"169.254.0.120"},"wait":0}},{},{"com.cloud.agent.api.routing.AggregationControlCommand":{"action":"Start","accessDetails":{"router.guest.ip":"10.1.1.1","
> > > router.name
> > >
> >
> ":"r-97-VM","router.ip":"169.254.0.120"},"wait":0}},{"com.cloud.agent.api.routing.IpAssocCommand":{"ipAddresses":[{"accountId":23,"publicIp":"100.65.36.119","sourceNat":true,"add":true,"oneToOneNat":false,"firstIP":true,"broadcastUri":"vlan://501","vlanGateway":"100.65.36.65","vlanNetmask":"255.255.255.192","vifMacAddress":"06:af:70:00:01:14","networkRate":200,"trafficType":"Public","networkName":"public","newNic":false}],"accessDetails":{"zone.network.type":"Advanced","
> > > router.name
> > >
> >
> ":"r-97-VM","router.ip":"169.254.0.120","router.guest.ip":"10.1.1.1"},"wait":0}},{"com.cloud.agent.api.routing.SetFirewallRulesCommand":{"rules":[{"id":0,"srcIp":"","protocol":"all","revoked":false,"alreadyAdded":false,"sourceCidrList":[],"purpose":"Firewall","trafficType":"Egress","defaultEgressPolicy":false}],"accessDetails":{"router.guest.ip":"10.1.1.1","firewall.egress.default":"System","zone.network.type":"Advanced","router.ip":"169.254.0.120","
> > > router.name
> > >
> >
> ":"r-97-VM"},"wait":0}},{"com.cloud.agent.api.routing.SetMonitorServiceCommand":{"services":[{"id":0,"service":"dhcp","processname":"dnsmasq","serviceName":"dnsmasq","servicePath":"/var/run/dnsmasq/dnsmasq.pid","pidFile":"/var/run/dnsmasq/dnsmasq.pid","isDefault":false},{"id":0,"service":"loadbalancing","processname":"haproxy","serviceName":"haproxy","servicePath":"/var/run/haproxy.pid","pidFile":"/var/run/haproxy.pid","isDefault":false},{"id":0,"service":"ssh","processname":"sshd","serviceName":"ssh","servicePath":"/var/run/sshd.pid","pidFile":"/var/run/sshd.pid","isDefault":true},{"id":0,"service":"webserver","processname":"apache2","serviceName":"apache2","servicePath":"/var/run/apache2.pid","pidFile":"/var/run/apache2.pid","isDefault":true}],"accessDetails":{"
> > > router.name
> > >
> >
> ":"r-97-VM","router.ip":"169.254.0.120","router.guest.ip":"10.1.1.1"},"wait":0}},{"com.cloud.agent.api.routing.AggregationControlCommand":{"action":"Finish","accessDetails":{"router.guest.ip":"10.1.1.1","
> > > router.name":"r-97-VM","router.ip":"169.254.0.120"},"wait":0}}] }
> > > 2015-07-13 08:36:48,036 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-434:ctx-819aba7f) Seq 4-5299892336484951126: Executing request
> > > 2015-07-13 08:36:48,043 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) 1. The VM r-97-VM is in Starting state.
> > > 2015-07-13 08:36:48,065 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) Created VM 14e931b3-c51d-fa86-e2d4-2e25059de732 for r-97-VM
> > > 2015-07-13 08:36:48,069 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) PV args are -- quiet console=hvc0%template=domP%name=r-97-VM%eth2ip=100.65.36.119%eth2mask=255.255.255.192%gateway=100.65.36.65%eth0ip=10.1.1.1%eth0mask=255.255.255.0%domain=cs17cloud.internal%cidrsize=24%dhcprange=10.1.1.1%eth1ip=169.254.0.120%eth1mask=255.255.0.0%type=router%disable_rp_filter=true%dns1=8.8.8.8%dns2=8.8.4.4
> > > 2015-07-13 08:36:48,092 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) VBD e8612817-9d0c-2a6c-136f-5391831336e7 created for com.cloud.agent.api.to.DiskTO@5b2138b
> > > 2015-07-13 08:36:48,101 WARN  [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) Catch Exception: class com.xensource.xenapi.Types$UuidInvalid due to The uuid you supplied was invalid.
> > > The uuid you supplied was invalid.
> > >         at com.xensource.xenapi.Types.checkResponse(Types.java:1491)
> > >         at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
> > >         at com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:462)
> > >         at com.xensource.xenapi.VDI.getByUuid(VDI.java:341)
> > >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.createPatchVbd(CitrixResourceBase.java:1580)
> > >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.execute(CitrixResourceBase.java:1784)
> > >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:489)
> > >         at com.cloud.hypervisor.xenserver.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:64)
> > >         at com.cloud.hypervisor.xenserver.resource.XenServer610Resource.executeRequest(XenServer610Resource.java:87)
> > >         at com.cloud.hypervisor.xenserver.resource.XenServer620SP1Resource.executeRequest(XenServer620SP1Resource.java:65)
> > >         at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:302)
> > >         at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> > >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> > >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> > >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> > >         at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> > >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > >         at java.lang.Thread.run(Thread.java:744)
> > > 2015-07-13 08:36:48,102 WARN  [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) Unable to start r-97-VM due to
> > > The uuid you supplied was invalid.
> > >         at com.xensource.xenapi.Types.checkResponse(Types.java:1491)
> > >         at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
> > >         at com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:462)
> > >         at com.xensource.xenapi.VDI.getByUuid(VDI.java:341)
> > >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.createPatchVbd(CitrixResourceBase.java:1580)
> > >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.execute(CitrixResourceBase.java:1784)
> > >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:489)
> > >         at com.cloud.hypervisor.xenserver.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:64)
> > >         at com.cloud.hypervisor.xenserver.resource.XenServer610Resource.executeRequest(XenServer610Resource.java:87)
> > >         at com.cloud.hypervisor.xenserver.resource.XenServer620SP1Resource.executeRequest(XenServer620SP1Resource.java:65)
> > >         at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:302)
> > >         at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> > >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> > >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> > >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> > >         at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> > >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > >         at java.lang.Thread.run(Thread.java:744)
> > > 2015-07-13 08:36:48,124 WARN  [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) Unable to clean up VBD due to
> > > You gave an invalid object reference.  The object may have recently been deleted.  The class parameter gives the type of reference given, and the handle parameter echoes the bad value given.
> > >         at com.xensource.xenapi.Types.checkResponse(Types.java:693)
> > >         at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
> > >         at com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:462)
> > >         at com.xensource.xenapi.VBD.unplug(VBD.java:1109)
> > >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.handleVmStartFailure(CitrixResourceBase.java:1520)
> > >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.execute(CitrixResourceBase.java:1871)
> > >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:489)
> > >         at com.cloud.hypervisor.xenserver.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:64)
> > >         at com.cloud.hypervisor.xenserver.resource.XenServer610Resource.executeRequest(XenServer610Resource.java:87)
> > >         at com.cloud.hypervisor.xenserver.resource.XenServer620SP1Resource.executeRequest(XenServer620SP1Resource.java:65)
> > >         at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:302)
> > >         at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> > >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> > >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> > >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> > >         at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> > >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > >         at java.lang.Thread.run(Thread.java:744)
> > > 2015-07-13 08:36:48,128 WARN  [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) Unable to clean up VBD due to
> > > You gave an invalid object reference.  The object may have recently been deleted.  The class parameter gives the type of reference given, and the handle parameter echoes the bad value given.
> > >         at com.xensource.xenapi.Types.checkResponse(Types.java:693)
> > >         at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
> > >         at com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:462)
> > >         at com.xensource.xenapi.VBD.unplug(VBD.java:1109)
> > >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.handleVmStartFailure(CitrixResourceBase.java:1520)
> > >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.execute(CitrixResourceBase.java:1871)
> > >         at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:489)
> > >         at com.cloud.hypervisor.xenserver.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:64)
> > >         at com.cloud.hypervisor.xenserver.resource.XenServer610Resource.executeRequest(XenServer610Resource.java:87)
> > >         at com.cloud.hypervisor.xenserver.resource.XenServer620SP1Resource.executeRequest(XenServer620SP1Resource.java:65)
> > >         at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:302)
> > >         at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> > >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> > >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> > >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> > >         at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> > >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > >         at java.lang.Thread.run(Thread.java:744)
> > > 2015-07-13 08:36:48,129 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-434:ctx-819aba7f) The VM is in stopped state, detected problem during startup : r-97-VM
> > > 2015-07-13 08:36:48,129 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-434:ctx-819aba7f) Seq 4-5299892336484951126: Cancelling because one of the answers is false and it is stop on error.
> > > 2015-07-13 08:36:48,129 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-434:ctx-819aba7f) Seq 4-5299892336484951126: Response Received:
> > > 2015-07-13 08:36:48,130 DEBUG [c.c.a.t.Request] (DirectAgent-434:ctx-819aba7f) Seq 4-5299892336484951126: Processing: { Ans: , MgmtId: 59778234354585, via: 4, Ver: v1, Flags: 10,
> > >
> >
> [{"com.cloud.agent.api.StartAnswer":{"vm":{"id":97,"name":"r-97-VM","bootloader":"PyGrub","type":"DomainRouter","cpus":1,"minSpeed":500,"maxSpeed":500,"minRam":268435456,"maxRam":268435456,"arch":"x86_64","os":"Debian
> > > GNU/Linux 7(64-bit)","platformEmulator":"Debian Wheezy 7.0
> > > (64-bit)","bootArgs":" template=domP name=r-97-VM eth2ip=100.65.36.119
> > > eth2mask=255.255.255.192 gateway=100.65.36.65 eth0ip=10.1.1.1
> > > eth0mask=255.255.255.0 domain=cs17cloud.internal cidrsize=24
> > > dhcprange=10.1.1.1 eth1ip=169.254.0.120 eth1mask=255.255.0.0
> type=router
> > > disable_rp_filter=true dns1=8.8.8.8
> > >
> >
> dns2=8.8.4.4","enableHA":true,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"0R3TO+O9g+kGxMdtFbt0rw==","params":{},"uuid":"80b6edf0-7301-4985-b2a6-fae64636c5e8","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"2fb465e2-f51f-4b46-8ec2-153fd843c6cf","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"876d490c-a1d4-3bfe-88b7-1bdb2479541b","id":1,"poolType":"NetworkFilesystem","host":"172.16.5.194","path":"/tank/primstore","port":2049,"url":"NetworkFilesystem://
> > >
> >
> 172.16.5.194/tank/primstore/?ROLE=Primary&STOREUUID=876d490c-a1d4-3bfe-88b7-1bdb2479541b
> >
> "}},"name":"ROOT-97","size":2684354560,"path":"b9b23a67-9bfe-485a-906c-dfe8282fe868","volumeId":133,"vmName":"r-97-VM","accountId":23,"format":"VHD","provisioningType":"THIN","id":133,"deviceId":0,"hypervisorType":"XenServer"}},"diskSeq":0,"path":"b9b23a67-9bfe-485a-906c-dfe8282fe868","type":"ROOT","_details":{"managed":"false","storagePort":"2049","storageHost":"172.16.5.194","volumeSize":"2684354560"}}],"nics":[{"deviceId":2,"networkRateMbps":200,"defaultNic":true,"pxeDisable":true,"nicUuid":"f699a9b6-cc02-4e7e-805b-0005d69eadac","uuid":"1b5905ad-12b0-4594-be02-26aa753a640d","ip":"100.65.36.119","netmask":"255.255.255.192","gateway":"100.65.36.65","mac":"06:14:88:00:01:14","dns1":"8.8.8.8","dns2":"8.8.4.4","broadcastType":"Vlan","type":"Public","broadcastUri":"vlan://501","isolationUri":"vlan://501","isSecurityGroupEnabled":false,"name":"public"},{"deviceId":0,"networkRateMbps":200,"defaultNic":false,"pxeDisable":true,"nicUuid":"52f8e291-c671-4bfe-b37b-9a0af82f09fd","uuid":"2a9f3c45-cdcf-4f39-a97c-ac29f1c21888","ip":"10.1.1.1","netmask":"255.255.255.0","mac":"02:00:73:a2:00:02","dns1":"8.8.8.8","dns2":"8.8.4.4","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://714","isolationUri":"vlan://714","isSecurityGroupEnabled":false,"name":"guest"},{"deviceId":1,"networkRateMbps":-1,"defaultNic":false,"pxeDisable":true,"nicUuid":"b1b8c3d6-e1e6-4575-8371-5c9b3e3a0c66","uuid":"527ed501-3b46-4d98-8e0a-d8d299870f32","ip":"169.254.0.120","netmask":"255.255.0.0","gateway":"169.254.0.1","mac":"0e:00:a9:fe:00:78","broadcastType":"LinkLocal","type":"Control","isSecurityGroupEnabled":false}]},"_iqnToPath":{},"result":false,"details":"Unable
> > > to start r-97-VM due to ","wait":0}}] }
> > > 2015-07-13 08:36:48,130 DEBUG [c.c.a.t.Request] (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Seq 4-5299892336484951126: Received:  { Ans: , MgmtId: 59778234354585, via: 4, Ver: v1, Flags: 10, { StartAnswer } }
> > > 2015-07-13 08:36:48,175 INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Unable to start VM on Host[-4-Routing] due to Unable to start r-97-VM due to
> > > 2015-07-13 08:36:48,223 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Cleaning up resources for the vm VM[DomainRouter|r-97-VM] in Starting state
> > > 2015-07-13 08:36:48,230 DEBUG [c.c.a.t.Request] (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Seq 4-5299892336484951127: Sending  { Cmd , MgmtId: 59778234354585, via: 4(SeSolXS02), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":false,"vmName":"r-97-VM","wait":0}}] }
> > > 2015-07-13 08:36:48,230 DEBUG [c.c.a.t.Request] (Work-Job-Executor-3:ctx-58f77d9c job-4353/job-4357 ctx-83fe75fb) Seq 4-5299892336484951127: Executing:  { Cmd , MgmtId: 59778234354585, via: 4(SeSolXS02), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":false,"vmName":"r-97-VM","wait":0}}] }
> > > 2015-07-13 08:36:48,230 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-53:ctx-de9ca4c0) Seq 4-5299892336484951127: Executing request
> > >
> > >
> > > /Sonali
> > >
> > > -----Original Message-----
> > > From: Remi Bergsma [mailto:r...@remi.nl]
> > > Sent: Saturday, July 11, 2015 5:34 PM
> > > To: users@cloudstack.apache.org
> > > Subject: Re: Urgent: VMs not migrated after putting Xenserver host in
> > > maintenance mode
> > >
> > > Hi,
> > >
> > > Did you also set the 'removed' column back to NULL (instead of the
> > > date/time it was originally deleted)?
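> > >
> > > Something like this, with a hypothetical id (and take a database backup
> > > first):
> > >
> > >     mysql -u cloud -p cloud -e "UPDATE disk_offering SET removed = NULL WHERE id = 42;"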
> > >
> > > You can migrate directly from XenServer in 4.5.1, no problem. When the
> > > hypervisor connects to CloudStack again it will report its running VMs and
> > > update the database. I guess there was a problem in 4.4.3 where
> > > out-of-band migrations would cause a reboot of a router. Not sure if it is
> > > also in 4.5.1. It's fixed in 4.4.4 and also in the upcoming 4.5.2. If your
> > > remaining VMs are not routers, there is no issue. Otherwise you risk a
> > > reboot (which is quite fast anyway).
> > >
> > > I'd first double check the disk offering, also check its tags etc. If that
> > > works, then migrate in CloudStack (as it is supposed to work). If not, you
> > > can do it directly from XenServer in order to empty your host and proceed
> > > with the migration. Once the migration is done, fix any remaining issues.
> > >
> > > Hope this helps.
> > >
> > > Regards,
> > > Remi
> > >
> > >
> > > > On 11 Jul 2015, at 12:57, Sonali Jadhav <son...@servercentralen.se> wrote:
> > > >
> > > > Hi, I am using 4.5.1. That's why I am upgrading all XenServers to 6.5.
> > > >
> > > > I didn't know that I could migrate a VM from the XenServer host itself. I
> > > > thought that would make the CloudStack database inconsistent, since the
> > > > migration is not initiated from CloudStack.
> > > >
> > > > And as I said before, those VMs have a compute offering which was
> > > > deleted, but I "undeleted" it by setting the state to "Active" in the
> > > > disk_offering table.
> > > >
> > > > Sent from my Sony Xperia(tm) smartphone
> > > >
> > > >
> > > > ---- Remi Bergsma wrote ----
> > > >
> > > > Hi Sonali,
> > > >
> > > > What version of CloudStack do you use? We can then look at the source at
> > > > line 292 of DeploymentPlanningManagerImpl.java. If I look at master, it
> > > > indeed tries to do something with the compute offerings. Could you also
> > > > post its specs (print the result of the select query where you set the
> > > > field to active)? We might be able to tell what's wrong with it.
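> > > >
> > > > For example, with a hypothetical id:
> > > >
> > > >     mysql -u cloud -p cloud -e "SELECT id, name, state, removed, tags FROM disk_offering WHERE id = 42\G"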
> > > >
> > > > As plan B, assuming you use a recent CloudStack version, you can use
> > > > 'xe vm-migrate' to migrate VMs directly off of the hypervisor from the
> > > > command line on the XenServer. Like this:
> > > >
> > > >     xe vm-migrate vm=i-12-345-VM host=xen3
> > > >
> > > > Recent versions of CloudStack will properly pick this up. When the VMs
> > > > are gone, the hypervisor will enter maintenance mode just fine.
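> > > >
> > > > To empty the host in one go, a rough sketch (host names are examples):
> > > >
> > > >     # uuid of the host being emptied
> > > >     HOST_UUID=$(xe host-list name-label=xen2 --minimal)
> > > >     # migrate every guest resident on it to another host
> > > >     for VM_UUID in $(xe vm-list resident-on=$HOST_UUID is-control-domain=false --minimal | tr ',' ' '); do
> > > >         xe vm-migrate vm=$VM_UUID host=xen3
> > > >     done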
> > > >
> > > > Regards,
> > > > Remi
> > > >
> > > >
> > > >> On 11 Jul 2015, at 09:42, Sonali Jadhav <son...@servercentralen.se> wrote:
> > > >>
> > > >> Can anyone help me please?
> > > >>
> > > >> When I put a XenServer host into maintenance mode, there are 3 VMs which
> > > >> are not getting migrated to another host in the cluster.
> > > >> The other VMs were moved, but not these three. They had a compute
> > > >> offering which was removed, but I undeleted those compute offerings, as
> > > >> Andrija Panic suggested, by changing their state to Active in the
> > > >> cloud.disk_offering table.
> > > >>
> > > >> But I am still seeing the following errors, and I am totally stuck: I
> > > >> have a cluster of 4 XenServers, and I have upgraded 3 of them to 6.5, all
> > > >> except this one. I can't reboot it for the upgrade without moving these
> > > >> instances to another host.
> > > >>
> > > >> [o.a.c.f.j.i.AsyncJobManagerImpl] (HA-Worker-2:ctx-68459b74 work-73)
> > > >> Sync job-4090 execution on object VmWorkJobQueue.32
> > > >> 2015-07-09 14:27:00,908 INFO  [c.c.h.HighAvailabilityManagerImpl]
> > > >> (HA-Worker-3:ctx-6ee7e62f work-74) Processing
> > > >> HAWork[74-Migration-34-Running-Scheduled]
> > > >> 2015-07-09 14:27:01,147 WARN  [o.a.c.f.j.AsyncJobExecutionContext]
> > > >> (HA-Worker-3:ctx-6ee7e62f work-74) Job is executed without a
> context,
> > > >> setup psudo job for the executing thread
> > > >> 2015-07-09 14:27:01,162 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > >> (HA-Worker-3:ctx-6ee7e62f work-74) Sync job-4091 execution on object
> > > >> VmWorkJobQueue.34
> > > >> 2015-07-09 14:27:01,191 DEBUG [c.c.r.ResourceManagerImpl]
> > > >> (API-Job-Executor-107:ctx-4f5d495d job-4088 ctx-5921f0d2) Sent
> > > >> resource event EVENT_PREPARE_MAINTENANCE_AFTER to listener
> > > >> CapacityManagerImpl
> > > >> 2015-07-09 14:27:01,206 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > >> (API-Job-Executor-107:ctx-4f5d495d job-4088 ctx-5921f0d2) Complete
> > > >> async job-4088, jobStatus: SUCCEEDED, resultCode: 0, result:
> > > >>
> org.apache.cloudstack.api.response.HostResponse/host/{"id":"c3c78959-
> > > >>
> 6387-4cc9-8f59-23d44d2257a8","name":"SeSolXS03","state":"Up","disconn
> > > >>
> ected":"2015-07-03T12:13:06+0200","type":"Routing","ipaddress":"172.1
> > > >>
> 6.5.188","zoneid":"1baf17c9-8325-4fa6-bffc-e502a33b578b","zonename":"
> > > >>
> Solna","podid":"07de38ee-b63f-4285-819c-8abbdc392ab0","podname":"SeSo
> > > >>
> lRack1","version":"4.5.1","hypervisor":"XenServer","cpusockets":2,"cp
> > > >>
> unumber":24,"cpuspeed":2400,"cpuallocated":"0%","cpuused":"0%","cpuwi
> > > >>
> thoverprovisioning":"57600.0","networkkbsread":0,"networkkbswrite":0,
> > > >>
> "memorytotal":95574311424,"memoryallocated":0,"memoryused":13790400,"
> > > >> capabilities":"xen-3.0-x86_64 , xen-3.0-x86_32p , hvm-3.0-x86_32 ,
> > > >> hvm-3.0-x86_32p ,
> > > >>
> hvm-3.0-x86_64","lastpinged":"1970-01-17T06:39:19+0100","managementse
> > > >>
> rverid":59778234354585,"clusterid":"fe15e305-5c11-4785-a13d-e4581e23f
> > > >>
> 5e7","clustername":"SeSolCluster1","clustertype":"CloudManaged","islo
> > > >>
> calstorageactive":false,"created":"2015-01-27T10:55:13+0100","events"
> > > >> :"ManagementServerDown; AgentConnected; Ping; Remove;
> > > >> AgentDisconnected; HostDown; ShutdownRequested; StartAgentRebalance;
> > > >>
> PingTimeout","resourcestate":"PrepareForMaintenance","hypervisorversi
> > > >>
> on":"6.2.0","hahost":false,"jobid":"7ad72023-a16f-4abf-84a3-83dd0e9f6
> > > >> bfd","jobstatus":0}
> > > >> 2015-07-09 14:27:01,208 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > >> (API-Job-Executor-107:ctx-4f5d495d job-4088 ctx-5921f0d2) Publish
> > > >> async job-4088 complete on message bus
> > > >> 2015-07-09 14:27:01,208 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > >> (API-Job-Executor-107:ctx-4f5d495d job-4088 ctx-5921f0d2) Wake up
> > > >> jobs related to job-4088
> > > >> 2015-07-09 14:27:01,209 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > >> (API-Job-Executor-107:ctx-4f5d495d job-4088 ctx-5921f0d2) Update db
> > > >> status for job-4088
> > > >> 2015-07-09 14:27:01,211 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > >> (API-Job-Executor-107:ctx-4f5d495d job-4088 ctx-5921f0d2) Wake up
> > > >> jobs joined with job-4088 and disjoin all subjobs created from job-
> > > >> 4088
> > > >> 2015-07-09 14:27:01,386 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > >> (API-Job-Executor-107:ctx-4f5d495d job-4088) Done executing
> > > >>
> org.apache.cloudstack.api.command.admin.host.PrepareForMaintenanceCmd
> > > >> for job-4088
> > > >> 2015-07-09 14:27:01,389 INFO  [o.a.c.f.j.i.AsyncJobMonitor]
> > > >> (API-Job-Executor-107:ctx-4f5d495d job-4088) Remove job-4088 from
> job
> > > >> monitoring
> > > >> 2015-07-09 14:27:02,755 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > >> (AsyncJobMgr-Heartbeat-1:ctx-1c99f7cd) Execute sync-queue item:
> > > >> SyncQueueItemVO {id:2326, queueId: 251, contentType: AsyncJob,
> > > >> contentId: 4091, lastProcessMsid: 59778234354585, lastprocessNumber:
> > > >> 193, lastProcessTime: Thu Jul 09 14:27:02 CEST 2015, created: Thu
> Jul
> > > >> 09 14:27:01 CEST 2015}
> > > >> 2015-07-09 14:27:02,758 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > >> (AsyncJobMgr-Heartbeat-1:ctx-1c99f7cd) Schedule queued job-4091
> > > >> 2015-07-09 14:27:02,810 INFO  [o.a.c.f.j.i.AsyncJobMonitor]
> > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Add job-4091
> > > >> into job monitoring
> > > >> 2015-07-09 14:27:02,819 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Executing
> > > >> AsyncJobVO {id:4091, userId: 1, accountId: 1, instanceType: null,
> > > >> instanceId: null, cmd: com.cloud.vm.VmWorkMigrateAway, cmdInfo:
> > > >>
> rO0ABXNyAB5jb20uY2xvdWQudm0uVm1Xb3JrTWlncmF0ZUF3YXmt4MX4jtcEmwIAAUoAC
> > > >>
> XNyY0hvc3RJZHhyABNjb20uY2xvdWQudm0uVm1Xb3Jrn5m2VvAlZ2sCAARKAAlhY2NvdW
> > > >>
> 50SWRKAAZ1c2VySWRKAAR2bUlkTAALaGFuZGxlck5hbWV0ABJMamF2YS9sYW5nL1N0cml
> > > >>
> uZzt4cAAAAAAAAAABAAAAAAAAAAEAAAAAAAAAInQAGVZpcnR1YWxNYWNoaW5lTWFuYWdl
> > > >> ckltcGwAAAAAAAAABQ, cmdVersion: 0, status: IN_PROGRESS,
> > > >> processStatus: 0, resultCode: 0, result: null, initMsid:
> > > >> 59778234354585, completeMsid: null, lastUpdated: null, lastPolled:
> > > >> null, created: Thu Jul 09 14:27:01 CEST 2015}
> > > >> 2015-07-09 14:27:02,820 DEBUG [c.c.v.VmWorkJobDispatcher]
> > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Run VM work
> > > >> job: com.cloud.vm.VmWorkMigrateAway for VM 34, job origin: 3573
> > > >> 2015-07-09 14:27:02,822 DEBUG [c.c.v.VmWorkJobHandlerProxy]
> > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091 ctx-744a984e)
> > > >> Execute VM work job:
> > > >>
> com.cloud.vm.VmWorkMigrateAway{"srcHostId":5,"userId":1,"accountId":1
> > > >> ,"vmId":34,"handlerName":"VirtualMachineManagerImpl"}
> > > >> 2015-07-09 14:27:02,852 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091 ctx-744a984e)
> > > >> Deploy avoids pods: [], clusters: [], hosts: [5]
> > > >> 2015-07-09 14:27:02,855 ERROR [c.c.v.VmWorkJobHandlerProxy]
> > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091 ctx-744a984e)
> > > >> Invocation exception, caused by: java.lang.NullPointerException
> > > >> 2015-07-09 14:27:02,855 INFO  [c.c.v.VmWorkJobHandlerProxy]
> > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091 ctx-744a984e)
> > > >> Rethrow exception java.lang.NullPointerException
> > > >> 2015-07-09 14:27:02,855 DEBUG [c.c.v.VmWorkJobDispatcher]
> > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Done with run
> > > >> of VM work job: com.cloud.vm.VmWorkMigrateAway for VM 34, job
> origin:
> > > >> 3573
> > > >> 2015-07-09 14:27:02,855 ERROR [c.c.v.VmWorkJobDispatcher]
> > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Unable to
> > > complete AsyncJobVO {id:4091, userId: 1, accountId: 1, instanceType:
> > null,
> > > instanceId: null, cmd: com.cloud.vm.VmWorkMigrateAway, cmdInfo:
> > >
> >
> rO0ABXNyAB5jb20uY2xvdWQudm0uVm1Xb3JrTWlncmF0ZUF3YXmt4MX4jtcEmwIAAUoACXNyY0hvc3RJZHhyABNjb20uY2xvdWQudm0uVm1Xb3Jrn5m2VvAlZ2sCAARKAAlhY2NvdW50SWRKAAZ1c2VySWRKAAR2bUlkTAALaGFuZGxlck5hbWV0ABJMamF2YS9sYW5nL1N0cmluZzt4cAAAAAAAAAABAAAAAAAAAAEAAAAAAAAAInQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAABQ,
> > > cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0,
> > > result: null, initMsid: 59778234354585, completeMsid: null,
> lastUpdated:
> > > null, lastPolled: null, created: Thu Jul 09 14:27:01 CEST 2015}, job
> > > origin:3573 java.lang.NullPointerException
> > > >>       at com.cloud.deploy.DeploymentPlanningManagerImpl.planDeployment(DeploymentPlanningManagerImpl.java:292)
> > > >>       at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:2376)
> > > >>       at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:4517)
> > > >>       at sun.reflect.GeneratedMethodAccessor563.invoke(Unknown Source)
> > > >>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > >>       at java.lang.reflect.Method.invoke(Method.java:606)
> > > >>       at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
> > > >>       at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4636)
> > > >>       at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
> > > >>       at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
> > > >>       at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> > > >>       at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> > > >>       at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> > > >>       at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> > > >>       at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> > > >>       at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
> > > >>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > > >>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> > > >>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > > >>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > > >>       at java.lang.Thread.run(Thread.java:744)
> > > >> 2015-07-09 14:27:02,863 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Complete async
> > > >> job-4091, jobStatus: FAILED, resultCode: 0, result:
> > > >>
> rO0ABXNyAB5qYXZhLmxhbmcuTnVsbFBvaW50ZXJFeGNlcHRpb25HpaGO_zHhuAIAAHhyA
> > > >>
> BpqYXZhLmxhbmcuUnVudGltZUV4Y2VwdGlvbp5fBkcKNIPlAgAAeHIAE2phdmEubGFuZy
> > > >>
> 5FeGNlcHRpb27Q_R8-GjscxAIAAHhyABNqYXZhLmxhbmcuVGhyb3dhYmxl1cY1Jzl3uMs
> > > >>
> DAARMAAVjYXVzZXQAFUxqYXZhL2xhbmcvVGhyb3dhYmxlO0wADWRldGFpbE1lc3NhZ2V0
> > > >>
> ABJMamF2YS9sYW5nL1N0cmluZztbAApzdGFja1RyYWNldAAeW0xqYXZhL2xhbmcvU3RhY
> > > >>
> 2tUcmFjZUVsZW1lbnQ7TAAUc3VwcHJlc3NlZEV4Y2VwdGlvbnN0ABBMamF2YS91dGlsL0
> > > >>
> xpc3Q7eHBxAH4ACHB1cgAeW0xqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnQ7AkYqPDz
> > > >>
> 9IjkCAAB4cAAAABVzcgAbamF2YS5sYW5nLlN0YWNrVHJhY2VFbGVtZW50YQnFmiY23YUC
> > > >>
> AARJAApsaW5lTnVtYmVyTAAOZGVjbGFyaW5nQ2xhc3NxAH4ABUwACGZpbGVOYW1lcQB-A
> > > >>
> AVMAAptZXRob2ROYW1lcQB-AAV4cAAAASR0AC5jb20uY2xvdWQuZGVwbG95LkRlcGxveW
> > > >>
> 1lbnRQbGFubmluZ01hbmFnZXJJbXBsdAAiRGVwbG95bWVudFBsYW5uaW5nTWFuYWdlckl
> > > >>
> tcGwuamF2YXQADnBsYW5EZXBsb3ltZW50c3EAfgALAAAJSHQAJmNvbS5jbG91ZC52bS5W
> > > >>
> aXJ0dWFsTWFjaGluZU1hbmFnZXJJbXBsdAAeVmlydHVhbE1hY2hpbmVNYW5hZ2VySW1wb
> > > >>
> C5qYXZhdAAWb3JjaGVzdHJhdGVNaWdyYXRlQXdheXNxAH4ACwAAEaVxAH4AEXEAfgAScQ
> > > >>
> B-ABNzcQB-AAv_____dAAmc3VuLnJlZmxlY3QuR2VuZXJhdGVkTWV0aG9kQWNjZXNzb3I
> > > >>
> 1NjNwdAAGaW52b2tlc3EAfgALAAAAK3QAKHN1bi5yZWZsZWN0LkRlbGVnYXRpbmdNZXRo
> > > >>
> b2RBY2Nlc3NvckltcGx0ACFEZWxlZ2F0aW5nTWV0aG9kQWNjZXNzb3JJbXBsLmphdmFxA
> > > >>
> H4AF3NxAH4ACwAAAl50ABhqYXZhLmxhbmcucmVmbGVjdC5NZXRob2R0AAtNZXRob2Quam
> > > >>
> F2YXEAfgAXc3EAfgALAAAAa3QAImNvbS5jbG91ZC52bS5WbVdvcmtKb2JIYW5kbGVyUHJ
> > > >>
> veHl0ABpWbVdvcmtKb2JIYW5kbGVyUHJveHkuamF2YXQAD2hhbmRsZVZtV29ya0pvYnNx
> > > >>
> AH4ACwAAEhxxAH4AEXEAfgAScQB-ACFzcQB-AAsAAABndAAgY29tLmNsb3VkLnZtLlZtV
> > > >>
> 29ya0pvYkRpc3BhdGNoZXJ0ABhWbVdvcmtKb2JEaXNwYXRjaGVyLmphdmF0AAZydW5Kb2
> > > >>
> JzcQB-AAsAAAIZdAA_b3JnLmFwYWNoZS5jbG91ZHN0YWNrLmZyYW1ld29yay5qb2JzLml
> > > >>
> tcGwuQXN5bmNKb2JNYW5hZ2VySW1wbCQ1dAAYQXN5bmNKb2JNYW5hZ2VySW1wbC5qYXZh
> > > >>
> dAAMcnVuSW5Db250ZXh0c3EAfgALAAAAMXQAPm9yZy5hcGFjaGUuY2xvdWRzdGFjay5tY
> > > >>
> W5hZ2VkLmNvbnRleHQuTWFuYWdlZENvbnRleHRSdW5uYWJsZSQxdAAbTWFuYWdlZENvbn
> > > >>
> RleHRSdW5uYWJsZS5qYXZhdAADcnVuc3EAfgALAAAAOHQAQm9yZy5hcGFjaGUuY2xvdWR
> > > >>
> zdGFjay5tYW5hZ2VkLmNvbnRleHQuaW1wbC5EZWZhdWx0TWFuYWdlZENvbnRleHQkMXQA
> > > >>
> GkRlZmF1bHRNYW5hZ2VkQ29udGV4dC5qYXZhdAAEY2FsbHNxAH4ACwAAAGd0AEBvcmcuY
> > > >>
> XBhY2hlLmNsb3Vkc3RhY2subWFuYWdlZC5jb250ZXh0LmltcGwuRGVmYXVsdE1hbmFnZW
> > > >>
> RDb250ZXh0cQB-ADF0AA9jYWxsV2l0aENvbnRleHRzcQB-AAsAAAA1cQB-ADRxAH4AMXQ
> > > >>
> ADnJ1bldpdGhDb250ZXh0c3EAfgALAAAALnQAPG9yZy5hcGFjaGUuY2xvdWRzdGFjay5t
> > > >>
> YW5hZ2VkLmNvbnRleHQuTWFuYWdlZENvbnRleHRSdW5uYWJsZXEAfgAtcQB-AC5zcQB-A
> > > >>
> AsAAAHucQB-AChxAH4AKXEAfgAuc3EAfgALAAAB13QALmphdmEudXRpbC5jb25jdXJyZW
> > > >>
> 50LkV4ZWN1dG9ycyRSdW5uYWJsZUFkYXB0ZXJ0AA5FeGVjdXRvcnMuamF2YXEAfgAyc3E
> > > >>
> AfgALAAABBnQAH2phdmEudXRpbC5jb25jdXJyZW50LkZ1dHVyZVRhc2t0AA9GdXR1cmVU
> > > >>
> YXNrLmphdmFxAH4ALnNxAH4ACwAABHl0ACdqYXZhLnV0aWwuY29uY3VycmVudC5UaHJlY
> > > >>
> WRQb29sRXhlY3V0b3J0ABdUaHJlYWRQb29sRXhlY3V0b3IuamF2YXQACXJ1bldvcmtlcn
> > > >>
> NxAH4ACwAAAmd0AC5qYXZhLnV0aWwuY29uY3VycmVudC5UaHJlYWRQb29sRXhlY3V0b3I
> > > >>
> kV29ya2VycQB-AENxAH4ALnNxAH4ACwAAAuh0ABBqYXZhLmxhbmcuVGhyZWFkdAALVGhy
> > > >>
> ZWFkLmphdmFxAH4ALnNyACZqYXZhLnV0aWwuQ29sbGVjdGlvbnMkVW5tb2RpZmlhYmxlT
> > > >>
> GlzdPwPJTG17I4QAgABTAAEbGlzdHEAfgAHeHIALGphdmEudXRpbC5Db2xsZWN0aW9ucy
> > > >>
> RVbm1vZGlmaWFibGVDb2xsZWN0aW9uGUIAgMte9x4CAAFMAAFjdAAWTGphdmEvdXRpbC9
> > > >>
> Db2xsZWN0aW9uO3hwc3IAE2phdmEudXRpbC5BcnJheUxpc3R4gdIdmcdhnQMAAUkABHNp
> > > >> emV4cAAAAAB3BAAAAAB4cQB-AE94
> > > >> 2015-07-09 14:27:02,866 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Publish async
> > > >> job-4091 complete on message bus
> > > >> 2015-07-09 14:27:02,866 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Wake up jobs
> > > >> related to job-4091
> > > >> 2015-07-09 14:27:02,866 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Update db
> > > >> status for job-4091
> > > >> 2015-07-09 14:27:02,868 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Wake up jobs
> > > >> joined with job-4091 and disjoin all subjobs created from job- 4091
> > > >> 2015-07-09 14:27:02,918 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Done executing
> > > >> com.cloud.vm.VmWorkMigrateAway for job-4091
> > > >> 2015-07-09 14:27:02,926 INFO  [o.a.c.f.j.i.AsyncJobMonitor]
> > > >> (Work-Job-Executor-65:ctx-82ed9c8f job-3573/job-4091) Remove
> job-4091
> > > >> from job monitoring
> > > >> 2015-07-09 14:27:02,979 WARN  [c.c.h.HighAvailabilityManagerImpl] (HA-Worker-3:ctx-6ee7e62f work-74) Encountered unhandled exception during HA process, reschedule retry java.lang.NullPointerException
> > > >>       at com.cloud.deploy.DeploymentPlanningManagerImpl.planDeployment(DeploymentPlanningManagerImpl.java:292)
> > > >>       at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:2376)
> > > >>       at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:4517)
> > > >>       at sun.reflect.GeneratedMethodAccessor563.invoke(Unknown Source)
> > > >>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > >>       at java.lang.reflect.Method.invoke(Method.java:606)
> > > >>       at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
> > > >>       at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4636)
> > > >>       at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
> > > >>       at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
> > > >>       at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> > > >>       at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> > > >>       at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> > > >>       at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> > > >>       at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> > > >>       at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
> > > >>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > > >>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> > > >>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > > >>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > > >>       at java.lang.Thread.run(Thread.java:744)
> > > >> 2015-07-09 14:27:02,980 INFO  [c.c.h.HighAvailabilityManagerImpl]
> > > >> (HA-Worker-3:ctx-6ee7e62f work-74) Rescheduling
> > > >> HAWork[74-Migration-34-Running-Migrating] to try again at Thu Jul 09
> > > >> 14:37:16 CEST 2015
> > > >> 2015-07-09 14:27:03,008 DEBUG [c.c.a.m.AgentManagerImpl]
> > > >> (AgentManager-Handler-14:null) SeqA 11-89048: Processing Seq
> > > >> 11-89048:  { Cmd , MgmtId: -1, via: 11, Ver: v1, Flags: 11,
> > > >> [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":80,"_loadInfo":"{\n  \"connections\": []\n}","wait":0}}] }
> > > >> 2015-07-09 14:27:03,027 WARN  [c.c.h.HighAvailabilityManagerImpl]
> > > >> (HA-Worker-2:ctx-68459b74 work-73) Encountered unhandled exception
> > > >> during HA process, reschedule retry
> > > >> java.lang.NullPointerException
> > > >>       at com.cloud.deploy.DeploymentPlanningManagerImpl.planDeployment(DeploymentPlanningManagerImpl.java:292)
> > > >>       at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:2376)
> > > >>       at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:4517)
> > > >>       at sun.reflect.GeneratedMethodAccessor299.invoke(Unknown Source)
> > > >>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > >>       at java.lang.reflect.Method.invoke(Method.java:606)
> > > >>       at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
> > > >>       at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4636)
> > > >>       at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
> > > >>       at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
> > > >>       at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> > > >>       at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> > > >>       at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> > > >>       at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> > > >>       at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> > > >>       at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
> > > >>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > > >>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> > > >>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > > >>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > > >>       at java.lang.Thread.run(Thread.java:744)
> > > >> 2015-07-09 14:27:03,030 INFO  [c.c.h.HighAvailabilityManagerImpl]
> > > >> (HA-Worker-2:ctx-68459b74 work-73) Rescheduling
> > > >> HAWork[73-Migration-32-Running-Migrating] to try again at Thu Jul 09
> > > >> 14:37:16 CEST 2015
> > > >> 2015-07-09 14:27:03,075 WARN  [c.c.h.HighAvailabilityManagerImpl]
> > > >> (HA-Worker-1:ctx-105d205a work-72) Encountered unhandled exception
> > > >> during HA process, reschedule retry
> > > >> java.lang.NullPointerException
> > > >>       at com.cloud.deploy.DeploymentPlanningManagerImpl.planDeployment(DeploymentPlanningManagerImpl.java:292)
> > > >>       at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:2376)
> > > >>       at com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrateAway(VirtualMachineManagerImpl.java:4517)
> > > >>       at sun.reflect.GeneratedMethodAccessor299.invoke(Unknown Source)
> > > >>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > >>       at java.lang.reflect.Method.invoke(Method.java:606)
> > > >>       at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
> > > >>       at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4636)
> > > >>       at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
> > > >>       at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
> > > >>       at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> > > >>       at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> > > >>       at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> > > >>       at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> > > >>       at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> > > >>       at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:494)
> > > >>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > > >>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> > > >>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > > >>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > > >>       at java.lang.Thread.run(Thread.java:744)
> > > >> 2015-07-09 14:27:03,076 INFO  [c.c.h.HighAvailabilityManagerImpl]
> > > >> (HA-Worker-1:ctx-105d205a work-72) Rescheduling
> > > >> HAWork[72-Migration-31-Running-Migrating] to try again at Thu Jul 09
> > > >> 14:37:16 CEST 2015
> > > >> 2015-07-09 14:27:03,165 DEBUG [c.c.a.m.AgentManagerImpl]
> > > >> (AgentManager-Handler-14:null) SeqA 11-890
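Worth flagging in the excerpt above: the same NullPointerException out of DeploymentPlanningManagerImpl.planDeployment() hits work-72, work-73 and work-74 in turn, and each time HighAvailabilityManagerImpl just reschedules the work item about ten minutes out, so the identical trace repeats on every pass. When skimming a long management-server log for this loop, a rough tally like the following helps; a sketch that assumes only the two line formats quoted above and a placeholder log path:

import re
from collections import Counter

# Patterns match the WARN and INFO lines quoted above.
FAIL = re.compile(r"HA-Worker-\d+:ctx-[0-9a-f]+ (work-\d+)\) Encountered unhandled exception")
RESCHED = re.compile(r"HA-Worker-\d+:ctx-[0-9a-f]+ (work-\d+)\) Rescheduling")

def tally(path="management-server.log"):  # placeholder path
    """Count exception hits and reschedules per HA work item."""
    fails, resch = Counter(), Counter()
    with open(path, errors="replace") as log:
        for line in log:
            m = FAIL.search(line)
            if m:
                fails[m.group(1)] += 1
            m = RESCHED.search(line)
            if m:
                resch[m.group(1)] += 1
    for work, n in fails.most_common():
        print(f"{work}: {n} exceptions, {resch[work]} reschedules")

if __name__ == "__main__":
    tally()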
> > > >>
> > > >> /Sonali
> > > >>
> > > >> -----Original Message-----
> > > >> From: Sonali Jadhav [mailto:son...@servercentralen.se]
> > > >> Sent: Thursday, July 9, 2015 2:45 PM
> > > >> To: users@cloudstack.apache.org
> > > >> Subject: RE: VMs not migrated after putting Xenserver host in
> > > >> maintenance mode
> > > >>
> > > >> Ignore this, I found the problem.
> > > >>
> > > >> Though one question remains: from ACS, if I try to migrate an instance
> > > >> to another host, it doesn't show the upgraded host in the list. Why is
> > > >> that?
> > > >>
> > > >> /Sonali
> > > >>
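On the question above: the management server pre-filters migration targets (by cluster, hypervisor type and version, host tags, capacity) before the UI ever sees the list, so a host that is mid-upgrade or version-mismatched is silently excluded; during a rolling XenServer upgrade a version mismatch is a common cause. The findHostsForMigration API makes the filtering visible, since it returns each candidate host with a suitable-for-migration flag. A minimal signed call, sketched below; the endpoint, keys and VM id are placeholders, and the signing follows the standard CloudStack HMAC-SHA1 scheme:

import base64
import hashlib
import hmac
import urllib.parse
import urllib.request

# Placeholders: substitute your management server and account keys.
ENDPOINT = "http://mgmt.example.com:8080/client/api"
API_KEY = "your-api-key"
SECRET = "your-secret-key"

def call(command, **params):
    """Sign and issue a CloudStack API request (HMAC-SHA1 over the
    lower-cased, sorted query string, per the standard API auth scheme)."""
    params.update(command=command, apikey=API_KEY, response="json")
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}" for k, v in sorted(params.items())
    )
    digest = hmac.new(SECRET.encode(), query.lower().encode(), hashlib.sha1).digest()
    sig = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    with urllib.request.urlopen(f"{ENDPOINT}?{query}&signature={sig}") as resp:
        return resp.read().decode()

# Each host in the response carries a suitableformigration flag, which
# shows *why* a host is missing from the UI's migrate-to list.
print(call("findHostsForMigration", virtualmachineid="your-vm-uuid"))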
> > > >> -----Original Message-----
> > > >> From: Sonali Jadhav [mailto:son...@servercentralen.se]
> > > >> Sent: Thursday, July 9, 2015 2:00 PM
> > > >> To: users@cloudstack.apache.org
> > > >> Subject: VMs not migrated after putting Xenserver host in
> > > >> maintenance mode
> > > >>
> > > >> Hi,
> > > >>
> > > >> I am upgrading my XenServer hosts from 6.2 to 6.5. I have a cluster of
> > > >> 4 hosts and have managed to upgrade two of them. I added the 3rd host
> > > >> in ma
