You mean the system VMs for Zone 1 are running? Are they showing as connected 
in the DB?

If they are, can you stop those VMs and see if they start okay?
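To see what the DB reports for the system VMs and their agents, a query along these lines can be run on the management server. This is only a sketch: it assumes the default database name `cloud` and the stock 4.x schema, where `vm_instance` holds the system VM rows and `host` tracks agent connection status.

```shell
# Sketch: inspect system VM and agent state in the CloudStack DB.
# Assumes the default DB name "cloud" and the stock 4.x schema.
mysql -u cloud -p cloud -e "
  SELECT id, name, type, state
  FROM vm_instance
  WHERE type IN ('SecondaryStorageVm', 'ConsoleProxy')
    AND removed IS NULL;

  SELECT id, name, type, status
  FROM host
  WHERE removed IS NULL;"
```

The SSVM and CPVM rows should show state Running, and their corresponding `host` rows should show status Up. Stopping the system VMs (Infrastructure > System VMs in the UI) forces a fresh start, which will surface any template or storage problem in the management server log.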
________________________________________
From: Mohamed Infaz [infaz...@cse.mrt.ac.lk]
Sent: Tuesday, December 30, 2014 21:26
To: us...@cloudstack.apache.org
Cc: dev@cloudstack.apache.org
Subject: Re: Unable to start VM due to concurrent operation

This is the template I used: systemvm64template-2014-01-14-master-kvm.qcow2.bz2


On 31 December 2014 at 07:49, Mohamed Infaz <infaz...@cse.mrt.ac.lk> wrote:

> Hi Somesh,
>
> Thank you for the reply. Both of my system VMs are running, and the SSVM is
> downloading the ISO image. I was able to ping them. The system VM
> template I am using is the one for CloudStack 4.3.
>
> Thank you.
>
> On 30 December 2014 at 23:59, Somesh Naidu <somesh.na...@citrix.com>
> wrote:
>
>> Mohamed,
>>
>> The log snippet you have shared doesn't seem to be relevant to the
>> concurrent operation exception (can't see that exception stack).
>>
>> The message that did catch my attention though is,
>> -
>> 2014-12-31 05:25:07,972 DEBUG [c.c.s.s.SecondaryStorageManagerImpl]
>> (secstorage-1:ctx-463d2f57) System vm template is not ready at data center
>> 1, wait until it is ready to launch secondary storage vm
>> 2014-12-31 05:25:07,972 DEBUG [c.c.s.s.SecondaryStorageManagerImpl]
>> (secstorage-1:ctx-463d2f57) Zone 1 is not ready to launch secondary storage
>> VM yet
>> -
>>
>> That points to a missing system VM template. Please verify that you
>> have provisioned the secondary storage with the correct system VM
>> template (as per the docs).
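For reference, the 4.3 KVM system VM template is seeded onto secondary storage with the `cloud-install-sys-tmplt` script shipped with the management server. The sketch below assumes secondary storage is NFS-mounted at `/mnt/secondary`, and the download URL is the one conventionally documented for the template named earlier in this thread; substitute your own mount point and source as needed.

```shell
# Sketch: seed the 4.3 KVM system VM template onto secondary storage.
# Assumes secondary storage is NFS-mounted at /mnt/secondary; adjust the
# mount point and the template URL for your environment.
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
    -m /mnt/secondary \
    -u http://download.cloud.com/templates/4.3/systemvm64template-2014-01-14-master-kvm.qcow2.bz2 \
    -h kvm -F
```

Once the template registers as ready for the zone, the "System vm template is not ready at data center 1" message from SecondaryStorageManagerImpl should stop appearing and the SSVM should launch.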
>>
>> -Somesh
>>
>> -----Original Message-----
>> From: Mohamed Infaz [mailto:infaz...@cse.mrt.ac.lk]
>> Sent: Tuesday, December 30, 2014 11:03 AM
>> To: us...@cloudstack.apache.org
>> Cc: dev@cloudstack.apache.org
>> Subject: Unable to start VM due to concurrent operation
>>
>> Hi All,
>>
>> I have successfully deployed CloudStack 4.3 with two hosts; the setup
>> runs the management server on a separate physical host. I had issues
>> downloading an ISO, but I was finally able to download an ISO image. When I
>> try to create an instance, I get the following error message: "Unable to
>> start VM due to concurrent operation". I did some searching on the topic,
>> and it said the version 4.3 system template solved the issue. These are
>> some of the MS logs I collected. What could be the issue?
>>
>> 2014-12-31 05:25:07,972 DEBUG [c.c.s.s.SecondaryStorageManagerImpl]
>> (secstorage-1:ctx-463d2f57) System vm template is not ready at data center
>> 1, wait until it is ready to launch secondary storage vm
>> 2014-12-31 05:25:07,972 DEBUG [c.c.s.s.SecondaryStorageManagerImpl]
>> (secstorage-1:ctx-463d2f57) Zone 1 is not ready to launch secondary
>> storage
>> VM yet
>> 2014-12-31 05:25:12,425 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentManager-Handler-9:null) Ping from 4
>> 2014-12-31 05:25:12,834 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentManager-Handler-5:null) SeqA 3-91695: Processing Seq 3-91695:  { Cmd
>> , MgmtId: -1, via: 3, Ver: v1, Flags: 11,
>>
>> [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":1,"_loadInfo":"{\n
>> \"connections\": []\n}","wait":0}}] }
>> 2014-12-31 05:25:12,920 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentManager-Handler-5:null) SeqA 3-91695: Sending Seq 3-91695:  { Ans: ,
>> MgmtId: 248795600505608, via: 3, Ver: v1, Flags: 100010,
>> [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
>> 2014-12-31 05:25:15,239 DEBUG [c.c.n.ExternalDeviceUsageManagerImpl]
>> (ExternalNetworkMonitor-1:ctx-998a86df) External devices stats collector
>> is
>> running...
>> 2014-12-31 05:25:15,362 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
>> (RouterMonitor-1:ctx-dfdc73ce) Found 0 running routers.
>> 2014-12-31 05:25:15,366 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
>> (RouterStatusMonitor-1:ctx-79d58896) Found 0 routers to update status.
>> 2014-12-31 05:25:15,367 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
>> (RouterStatusMonitor-1:ctx-79d58896) Found 0 networks to update RvR
>> status.
>> 2014-12-31 05:25:15,393 DEBUG [c.c.s.s.SnapshotSchedulerImpl]
>> (SnapshotPollTask:ctx-ae761f6b) Snapshot scheduler.poll is being called at
>> 2014-12-30 23:55:15 GMT
>> 2014-12-31 05:25:15,393 DEBUG [c.c.s.s.SnapshotSchedulerImpl]
>> (SnapshotPollTask:ctx-ae761f6b) Got 0 snapshots to be executed at
>> 2014-12-30 23:55:15 GMT
>> 2014-12-31 05:25:18,535 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentManager-Handler-14:null) Ping from 3
>> 2014-12-31 05:25:18,698 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentManager-Handler-10:null) Ping from 2
>> 2014-12-31 05:25:19,834 INFO  [c.c.a.m.AgentManagerImpl]
>> (AgentMonitor-1:ctx-f10d32f0) Found the following agents behind on ping:
>> [1]
>> 2014-12-31 05:25:19,835 DEBUG [c.c.h.Status] (AgentMonitor-1:ctx-f10d32f0)
>> Ping timeout for host 1, do invstigation
>> 2014-12-31 05:25:19,837 INFO  [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-14:ctx-43a85937) Investigating why host 1 has disconnected
>> with event PingTimeout
>> 2014-12-31 05:25:19,837 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-14:ctx-43a85937) checking if agent (1) is alive
>> 2014-12-31 05:25:19,839 DEBUG [c.c.a.t.Request]
>> (AgentTaskPool-14:ctx-43a85937) Seq 1-449186863: Sending  { Cmd , MgmtId:
>> 248795600505608, via: 1(virtualops-h4), Ver: v1, Flags: 100011,
>> [{"com.cloud.agent.api.CheckHealthCommand":{"wait":50}}] }
>> 2014-12-31 05:25:19,844 DEBUG [c.c.a.t.Request]
>> (AgentManager-Handler-7:null) Seq 1-449186863: Processing:  { Ans: ,
>> MgmtId: 248795600505608, via: 1, Ver: v1, Flags: 10,
>>
>> [{"com.cloud.agent.api.CheckHealthAnswer":{"result":true,"details":"resource
>> is alive","wait":0}}] }
>> 2014-12-31 05:25:19,844 DEBUG [c.c.a.t.Request]
>> (AgentTaskPool-14:ctx-43a85937) Seq 1-449186863: Received:  { Ans: ,
>> MgmtId: 248795600505608, via: 1, Ver: v1, Flags: 10, { CheckHealthAnswer
>> } }
>> 2014-12-31 05:25:19,844 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-14:ctx-43a85937) Details from executing class
>> com.cloud.agent.api.CheckHealthCommand: resource is alive
>> 2014-12-31 05:25:19,844 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-14:ctx-43a85937) agent (1) responded to checkHeathCommand,
>> reporting that agent is Up
>> 2014-12-31 05:25:19,844 INFO  [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-14:ctx-43a85937) The state determined is Up
>> 2014-12-31 05:25:19,844 INFO  [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-14:ctx-43a85937) Agent is determined to be up and running
>> 2014-12-31 05:25:19,844 DEBUG [c.c.h.Status]
>> (AgentTaskPool-14:ctx-43a85937) Transition:[Resource state = Enabled,
>> Agent
>> event = Ping, Host id = 1, name = virtualops-h4]
>> 2014-12-31 05:25:19,929 DEBUG [c.c.h.Status]
>> (AgentTaskPool-14:ctx-43a85937) Agent status update: [id = 1; name =
>> virtualops-h4; old status = Up; event = Ping; new status = Up; old update
>> count = 34; new update count = 35]
>> 2014-12-31 05:25:22,873 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentManager-Handler-2:null) SeqA 3-91697: Processing Seq 3-91697:  { Cmd
>> , MgmtId: -1, via: 3, Ver: v1, Flags: 11,
>>
>> [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":1,"_loadInfo":"{\n
>> \"connections\": []\n}","wait":0}}] }
>> 2014-12-31 05:25:22,955 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentManager-Handler-2:null) SeqA 3-91697: Sending Seq 3-91697:  { Ans: ,
>> MgmtId: 248795600505608, via: 3, Ver: v1, Flags: 100010,
>> [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
>> 2014-12-31 05:25:25,002 DEBUG [c.c.s.StatsCollector]
>> (StatsCollector-1:ctx-2acfc
>>
>> Thank you.
>>
>
>
