No problem, I'm pleased that you're sorted.

Kind regards,

Paul Angus

paul.an...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue


-----Original Message-----
From: Jevgeni Zolotarjov [mailto:j.zolotar...@gmail.com] 
Sent: 09 February 2018 16:34
To: users@cloudstack.apache.org
Subject: Re: cloudstack-management fails to start after upgrade 4.10 -> 4.11

I destroyed the virtual router.

It got recreated, and NOW my VMs start!!!

All IPs got changed, but I can manage with that.

Thank you for your support

On Fri, Feb 9, 2018 at 6:21 PM, Paul Angus <paul.an...@shapeblue.com> wrote:

> Have you done the same for the virtual routers?
>
> I'll look at your log in the meantime....
>
>
> Kind regards,
>
> Paul Angus
>
>
> -----Original Message-----
> From: Jevgeni Zolotarjov [mailto:j.zolotar...@gmail.com]
> Sent: 09 February 2018 16:19
> To: users@cloudstack.apache.org
> Subject: Re: cloudstack-management fails to start after upgrade 4.10 ->
> 4.11
>
> I destroyed system VMs and they got recreated automatically.
> They are running. I can verify that by
> virsh list --all
>
> and
> I can see their console and it suggests that it is the CloudStack 4.11 system VM.
>
> BUT
> it didn't solve the problem. None of my own VMs are listed by "virsh list
> --all". They do not start.
> I tried to create a new VM; it does not start either, due to the same
> problem - insufficient capacity.
>
> On Fri, Feb 9, 2018 at 6:02 PM, Daan Hoogland <daan.hoogl...@gmail.com>
> wrote:
>
> > Listen to Paul, not to me.
> > He's an operator; I'm just impatient.
> >
> > On Fri, Feb 9, 2018 at 5:00 PM, Paul Angus <paul.an...@shapeblue.com>
> > wrote:
> >
> > > After upgrading the code of the mgmt. server you need to upgrade your
> > > system VMs from the old template to ones using the new template.
> > >
> > > This needs to be done for the SSVM, CPVM and all of your virtual
> > > routers.
> > >
> > > For the SSVM & CPVM you do this by destroying them, and CloudStack
> > > will recreate them with the new template.
> > > For the virtual routers, you can go to each of them in turn in the
> > > UI and there will be an Upgrade Router button.
> > >
> > > You get to these through the Infrastructure tab in the UI.
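> > >
> > > If you prefer the command line, a rough CloudMonkey equivalent would be
> > > something like this (just a sketch - the UUIDs are placeholders and the
> > > exact verbs/field names may vary with your CloudMonkey version):
> > >
> > >   list systemvms filter=id,name,systemvmtype
> > >   destroy systemvm id=<ssvm-uuid>
> > >   destroy systemvm id=<cpvm-uuid>
> > >   list routers filter=id,name,requiresupgrade
> > >   upgrade routertemplate id=<router-uuid>
> > >
> > > CloudStack recreates the destroyed SSVM/CPVM from the new template, and
> > > if I remember correctly upgradeRouterTemplate also accepts zoneid, podid
> > > or clusterid if you want to upgrade routers in bulk.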
> > >
> > >
> > > -----Original Message-----
> > > From: Jevgeni Zolotarjov [mailto:j.zolotar...@gmail.com]
> > > Sent: 09 February 2018 15:55
> > > To: users@cloudstack.apache.org
> > > Subject: Re: cloudstack-management fails to start after upgrade 4.10 ->
> > > 4.11
> > >
> > > Destroyed VMs? Nooooo. Definitely not.
> > >
> > > At least I can see them all under the web console Home -> Instances. I
> > > can even make snapshots.
> > >
> > > Today's management server log: https://www.sendspace.com/file/nxxgg0
> > >
> > >
> > >
> > > On Fri, Feb 9, 2018 at 5:33 PM, Paul Angus <paul.an...@shapeblue.com>
> > > wrote:
> > >
> > > > Hi Jevgeni,
> > > >
> > > > Can I take you off on a slight tangent...
> > > >
> > > > (Work-Job-Executor-7:ctx-a92d467b job-893/job-896 ctx-9ffd0cf1)
> > > > (logid:0afb959d) DataCenter id = '1' provided is in avoid set,
> > > > DeploymentPlanner cannot allocate the VM, returning.
> > > >
> > > > Have you destroyed your system VMs so that new 4.11 system VMs have
> > > > been deployed?
> > > >
> > > > It may help us if you can paste all of your management log from today
> > > > into https://pastebin.com (or similar) to share with us.
> > > >
> > > >
> > > > -----Original Message-----
> > > > From: Jevgeni Zolotarjov [mailto:j.zolotar...@gmail.com]
> > > > Sent: 09 February 2018 15:31
> > > > To: users@cloudstack.apache.org
> > > > Subject: Re: cloudstack-management fails to start after upgrade 4.10
> ->
> > > > 4.11
> > > >
> > > > The same result:
> > > >
> > > > [root@mtl1-apphst03 management]# virsh list --all
> > > >  Id    Name                           State
> > > > ----------------------------------------------------
> > > >  1     v-1-VM                         running
> > > >  2     s-2-VM                         running
> > > >
> > > >
> > > >
> > > > On Fri, Feb 9, 2018 at 5:28 PM, Daan Hoogland <
> daan.hoogl...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > >  so try
> > > > > # virsh list --all
> > > > >
> > > > > On Fri, Feb 9, 2018 at 4:27 PM, Jevgeni Zolotarjov
> > > > > <j.zolotar...@gmail.com
> > > > > >
> > > > > wrote:
> > > > >
> > > > > > I guess these are system VMs.
> > > > > > None of my own created instances can start.
> > > > > >  Id    Name                           State
> > > > > > ----------------------------------------------------
> > > > > >  1     v-1-VM                         running
> > > > > >  2     s-2-VM                         running
> > > > > >
> > > > > > virsh start <whatever number > 3> gives an error:
> > > > > > error: failed to get domain '8'
> > > > > > error: Domain not found: no domain with matching name '8'
> > > > > >
> > > > > >
> > > > > > And yes, CloudStack and MySQL - everything runs locally.
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Fri, Feb 9, 2018 at 5:15 PM, Daan Hoogland
> > > > > > <daan.hoogl...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > On Fri, Feb 9, 2018 at 3:58 PM, Jevgeni Zolotarjov <
> > > > > > j.zolotar...@gmail.com
> > > > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > * how to start an instance with virsh?
> > > > > > > >
> > > > > > > Ah, usually something like
> > > > > > > # virsh start <name>
> > > > > > > or
> > > > > > > # virsh start <number>
> > > > > > >
> > > > > > >
> > > > > > > > * starting in a debugger? I would rather not do it :)
> > > > > > > >
> > > > > > > OK, I expected as much. It will slow down our solution process,
> > > > > > > unfortunately.
> > > > > > >
> > > > > > > In the log below, at the bottom, it says you have two VMs and
> > > > > > > both are running.
> > > > > > > Is that correct?
> > > > > > >
> > > > > > >
> > > > > > > > * this is the log around GetHostStatsAnswer
> > > > > > > > 2018-02-09 14:05:30,274 DEBUG [c.c.s.StatsCollector]
> > > > > > > > (StatsCollector-4:ctx-167407f0) (logid:0c470e95)
> > > > > > > > HostStatsCollector
> > > > > is
> > > > > > > > running...
> > > > > > > > 2018-02-09 14:05:30,322 DEBUG [c.c.a.t.Request]
> > > > > > > > (StatsCollector-4:ctx-167407f0) (logid:0c470e95) Seq
> > > > > > > 3-613896924205940798:
> > > > > > > > Received:  { Ans: , MgmtId: 264216221068220, via:
> > > > > > > > 3(mtl1-apphst03),
> > > > > > Ver:
> > > > > > > > v1, Flags: 10, { GetHostStatsAnswer } }
> > > > > > > > 2018-02-09 14:05:31,388 DEBUG [c.c.s.StatsCollector]
> > > > > > > > (StatsCollector-1:ctx-308796ad) (logid:9e87351b)
> > > > > > > > StorageCollector is running...
> > > > > > > > 2018-02-09 14:05:31,396 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
> > > > > > > > (StatsCollector-1:ctx-308796ad) (logid:9e87351b)
> > > > > > > getCommandHostDelegation:
> > > > > > > > class com.cloud.agent.api.GetStorageStatsCommand
> > > > > > > > 2018-02-09 14:05:31,396 DEBUG [c.c.h.XenServerGuru]
> > > > > > > > (StatsCollector-1:ctx-308796ad) (logid:9e87351b) We are
> > > > > > > > returning
> > > > > the
> > > > > > > > default host to execute commands because the command is not
> of
> > > > > > > > Copy
> > > > > > type.
> > > > > > > > 2018-02-09 14:05:31,445 DEBUG [c.c.a.t.Request]
> > > > > > > > (StatsCollector-1:ctx-308796ad) (logid:9e87351b) Seq
> > > > > > > > 4-1941614389350105109:
> > > > > > > > Received:  { Ans: , MgmtId: 264216221068220, via: 4(s-2-VM),
> > > > > > > > Ver: v1,
> > > > > > > > Flags: 10, { GetStorageStatsAnswer } }
> > > > > > > > 2018-02-09 14:05:31,447 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
> > > > > > > > (StatsCollector-1:ctx-308796ad) (logid:9e87351b)
> > > > > > > getCommandHostDelegation:
> > > > > > > > class com.cloud.agent.api.GetStorageStatsCommand
> > > > > > > > 2018-02-09 14:05:31,447 DEBUG [c.c.h.XenServerGuru]
> > > > > > > > (StatsCollector-1:ctx-308796ad) (logid:9e87351b) We are
> > > > > > > > returning
> > > > > the
> > > > > > > > default host to execute commands because the command is not
> of
> > > > > > > > Copy
> > > > > > type.
> > > > > > > > 2018-02-09 14:05:31,517 DEBUG [c.c.a.t.Request]
> > > > > > > > (StatsCollector-1:ctx-308796ad) (logid:9e87351b) Seq
> > > > > > > 3-613896924205940799:
> > > > > > > > Received:  { Ans: , MgmtId: 264216221068220, via:
> > > > > > > > 3(mtl1-apphst03),
> > > > > > Ver:
> > > > > > > > v1, Flags: 10, { GetStorageStatsAnswer } }
> > > > > > > > 2018-02-09 14:05:34,504 INFO  [o.a.c.f.j.i.
> > AsyncJobManagerImpl]
> > > > > > > > (AsyncJobMgr-Heartbeat-1:ctx-6955e300) (logid:300ab196)
> Begin
> > > > > cleanup
> > > > > > > > expired async-jobs
> > > > > > > > 2018-02-09 14:05:34,506 INFO  [o.a.c.f.j.i.
> > AsyncJobManagerImpl]
> > > > > > > > (AsyncJobMgr-Heartbeat-1:ctx-6955e300) (logid:300ab196) End
> > > > > > > > cleanup expired async-jobs
> > > > > > > > 2018-02-09 14:05:35,979 DEBUG
> > > > > > > > [o.a.c.s.SecondaryStorageManagerImpl]
> > > > > > > > (secstorage-1:ctx-e83be1a3) (logid:60b9de64) Zone 1 is ready
> to
> > > > > launch
> > > > > > > > secondary storage VM
> > > > > > > > 2018-02-09 14:05:36,002 DEBUG [c.c.c.
> ConsoleProxyManagerImpl]
> > > > > > > > (consoleproxy-1:ctx-508f06d4) (logid:fea1da44) Zone 1 is
> ready
> > > > > > > > to
> > > > > > launch
> > > > > > > > console proxy
> > > > > > > > 2018-02-09 14:05:36,697 DEBUG [c.c.a.m.AgentManagerImpl]
> > > > > > > > (AgentManager-Handler-14:null) (logid:) SeqA 5-273:
> Processing
> > > > > > > > Seq
> > > > > > 5-273:
> > > > > > > > { Cmd , MgmtId: -1, via: 5, Ver: v1, Flags: 11,
> > > > > > > > [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand"
> > > > > > > > :{"_proxyVmId":1,"_loadInfo":"{\n
> > > > > > > > \"connections\": []\n}","wait":0}}] }
> > > > > > > > 2018-02-09 14:05:36,699 DEBUG [c.c.a.m.AgentManagerImpl]
> > > > > > > > (AgentManager-Handler-14:null) (logid:) SeqA 5-273: Sending
> Seq
> > > > > > 5-273:  {
> > > > > > > > Ans: , MgmtId: 264216221068220, via: 5, Ver: v1, Flags:
> 100010,
> > > > > > > > [{"com.cloud.agent.api.AgentControlAnswer":{"result":
> > > > > true,"wait":0}}]
> > > > > > }
> > > > > > > > 2018-02-09 14:05:44,502 INFO  [o.a.c.f.j.i.
> > AsyncJobManagerImpl]
> > > > > > > > (AsyncJobMgr-Heartbeat-1:ctx-61f44674) (logid:94748b08)
> Begin
> > > > > cleanup
> > > > > > > > expired async-jobs
> > > > > > > > 2018-02-09 14:05:44,505 INFO  [o.a.c.f.j.i.
> > AsyncJobManagerImpl]
> > > > > > > > (AsyncJobMgr-Heartbeat-1:ctx-61f44674) (logid:94748b08) End
> > > > > > > > cleanup expired async-jobs
> > > > > > > > 2018-02-09 14:05:44,582 DEBUG [c.c.n.r.
> > > > > VirtualNetworkApplianceManager
> > > > > > > Impl]
> > > > > > > > (RouterStatusMonitor-1:ctx-9eea60cf) (logid:171472b2) Found
> 0
> > > > > routers
> > > > > > to
> > > > > > > > update status.
> > > > > > > > 2018-02-09 14:05:44,583 DEBUG [c.c.n.r.
> > > > > VirtualNetworkApplianceManager
> > > > > > > Impl]
> > > > > > > > (RouterStatusMonitor-1:ctx-9eea60cf) (logid:171472b2) Found
> 0
> > > > > > > > VPC
> > > > > > > networks
> > > > > > > > to update Redundant State.
> > > > > > > > 2018-02-09 14:05:44,584 DEBUG [c.c.n.r.
> > > > > VirtualNetworkApplianceManager
> > > > > > > Impl]
> > > > > > > > (RouterStatusMonitor-1:ctx-9eea60cf) (logid:171472b2) Found
> 0
> > > > > networks
> > > > > > > to
> > > > > > > > update RvR status.
> > > > > > > > 2018-02-09 14:05:46,700 DEBUG [c.c.a.m.AgentManagerImpl]
> > > > > > > > (AgentManager-Handler-15:null) (logid:) SeqA 5-274:
> Processing
> > > > > > > > Seq
> > > > > > 5-274:
> > > > > > > > { Cmd , MgmtId: -1, via: 5, Ver: v1, Flags: 11,
> > > > > > > > [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand"
> > > > > > > > :{"_proxyVmId":1,"_loadInfo":"{\n
> > > > > > > > \"connections\": []\n}","wait":0}}] }
> > > > > > > > 2018-02-09 14:05:46,702 DEBUG [c.c.a.m.AgentManagerImpl]
> > > > > > > > (AgentManager-Handler-15:null) (logid:) SeqA 5-274: Sending
> Seq
> > > > > > 5-274:  {
> > > > > > > > Ans: , MgmtId: 264216221068220, via: 5, Ver: v1, Flags:
> 100010,
> > > > > > > > [{"com.cloud.agent.api.AgentControlAnswer":{"result":
> > > > > true,"wait":0}}]
> > > > > > }
> > > > > > > > 2018-02-09 14:05:49,610 DEBUG [c.c.h.d.HostDaoImpl]
> > > > > > > (ClusteredAgentManager
> > > > > > > > Timer:ctx-6233f81e) (logid:42d0cd7b) Resetting hosts suitable
> > > > > > > > for
> > > > > > > reconnect
> > > > > > > > 2018-02-09 14:05:49,611 DEBUG [c.c.h.d.HostDaoImpl]
> > > > > > > (ClusteredAgentManager
> > > > > > > > Timer:ctx-6233f81e) (logid:42d0cd7b) Completed resetting
> hosts
> > > > > suitable
> > > > > > > for
> > > > > > > > reconnect
> > > > > > > > 2018-02-09 14:05:49,611 DEBUG [c.c.h.d.HostDaoImpl]
> > > > > > > (ClusteredAgentManager
> > > > > > > > Timer:ctx-6233f81e) (logid:42d0cd7b) Acquiring hosts for
> > > > > > > > clusters
> > > > > > already
> > > > > > > > owned by this management server
> > > > > > > > 2018-02-09 14:05:49,611 DEBUG [c.c.h.d.HostDaoImpl]
> > > > > > > (ClusteredAgentManager
> > > > > > > > Timer:ctx-6233f81e) (logid:42d0cd7b) Completed acquiring
> hosts
> > > > > > > > for
> > > > > > > clusters
> > > > > > > > already owned by this management server
> > > > > > > > 2018-02-09 14:05:49,611 DEBUG [c.c.h.d.HostDaoImpl]
> > > > > > > (ClusteredAgentManager
> > > > > > > > Timer:ctx-6233f81e) (logid:42d0cd7b) Acquiring hosts for
> > > > > > > > clusters not
> > > > > > > owned
> > > > > > > > by any management server
> > > > > > > > 2018-02-09 14:05:49,612 DEBUG [c.c.h.d.HostDaoImpl]
> > > > > > > (ClusteredAgentManager
> > > > > > > > Timer:ctx-6233f81e) (logid:42d0cd7b) Completed acquiring
> hosts
> > > > > > > > for
> > > > > > > clusters
> > > > > > > > not owned by any management server
> > > > > > > > 2018-02-09 14:05:51,698 DEBUG [c.c.a.m.AgentManagerImpl]
> > > > > > > > (AgentManager-Handler-1:null) (logid:) SeqA 5-275: Processing
> > > > > > > > Seq
> > > > > > > 5-275:  {
> > > > > > > > Cmd , MgmtId: -1, via: 5, Ver: v1, Flags: 11,
> > > > > > > > [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand"
> > > > > > > > :{"_proxyVmId":1,"_loadInfo":"{\n
> > > > > > > > \"connections\": []\n}","wait":0}}] }
> > > > > > > > 2018-02-09 14:05:51,700 DEBUG [c.c.a.m.AgentManagerImpl]
> > > > > > > > (AgentManager-Handler-1:null) (logid:) SeqA 5-275: Sending
> Seq
> > > > 5-275:
> > > > > > {
> > > > > > > > Ans: , MgmtId: 264216221068220, via: 5, Ver: v1, Flags:
> 100010,
> > > > > > > > [{"com.cloud.agent.api.AgentControlAnswer":{"result":
> > > > > true,"wait":0}}]
> > > > > > }
> > > > > > > > 2018-02-09 14:05:54,503 INFO  [o.a.c.f.j.i.
> > AsyncJobManagerImpl]
> > > > > > > > (AsyncJobMgr-Heartbeat-1:ctx-e6965f65) (logid:dbadcba0)
> Begin
> > > > > cleanup
> > > > > > > > expired async-jobs
> > > > > > > > 2018-02-09 14:05:54,505 INFO  [o.a.c.f.j.i.
> > AsyncJobManagerImpl]
> > > > > > > > (AsyncJobMgr-Heartbeat-1:ctx-e6965f65) (logid:dbadcba0) End
> > > > > > > > cleanup expired async-jobs
> > > > > > > > 2018-02-09 14:06:01,699 DEBUG [c.c.a.m.AgentManagerImpl]
> > > > > > > > (AgentManager-Handler-5:null) (logid:) SeqA 5-276: Processing
> > > > > > > > Seq
> > > > > > > 5-276:  {
> > > > > > > > Cmd , MgmtId: -1, via: 5, Ver: v1, Flags: 11,
> > > > > > > > [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand"
> > > > > > > > :{"_proxyVmId":1,"_loadInfo":"{\n
> > > > > > > > \"connections\": []\n}","wait":0}}] }
> > > > > > > > 2018-02-09 14:06:01,702 DEBUG [c.c.a.m.AgentManagerImpl]
> > > > > > > > (AgentManager-Handler-5:null) (logid:) SeqA 5-276: Sending
> Seq
> > > > 5-276:
> > > > > > {
> > > > > > > > Ans: , MgmtId: 264216221068220, via: 5, Ver: v1, Flags:
> 100010,
> > > > > > > > [{"com.cloud.agent.api.AgentControlAnswer":{"result":
> > > > > true,"wait":0}}]
> > > > > > }
> > > > > > > > 2018-02-09 14:06:04,502 INFO  [o.a.c.f.j.i.
> > AsyncJobManagerImpl]
> > > > > > > > (AsyncJobMgr-Heartbeat-1:ctx-3feb65b6) (logid:e10057ae)
> Begin
> > > > > cleanup
> > > > > > > > expired async-jobs
> > > > > > > > 2018-02-09 14:06:04,507 INFO  [o.a.c.f.j.i.
> > AsyncJobManagerImpl]
> > > > > > > > (AsyncJobMgr-Heartbeat-1:ctx-3feb65b6) (logid:e10057ae) End
> > > > > > > > cleanup expired async-jobs
> > > > > > > > 2018-02-09 14:06:05,978 DEBUG
> > > > > > > > [o.a.c.s.SecondaryStorageManagerImpl]
> > > > > > > > (secstorage-1:ctx-8cfe1d93) (logid:405eee90) Zone 1 is ready
> to
> > > > > launch
> > > > > > > > secondary storage VM
> > > > > > > > 2018-02-09 14:06:06,000 DEBUG [c.c.c.
> ConsoleProxyManagerImpl]
> > > > > > > > (consoleproxy-1:ctx-1680ddc3) (logid:9d963dfc) Zone 1 is
> ready
> > > > > > > > to
> > > > > > launch
> > > > > > > > console proxy
> > > > > > > > 2018-02-09 14:06:11,701 DEBUG [c.c.a.m.AgentManagerImpl]
> > > > > > > > (AgentManager-Handler-2:null) (logid:) SeqA 5-277: Processing
> > > > > > > > Seq
> > > > > > > 5-277:  {
> > > > > > > > Cmd , MgmtId: -1, via: 5, Ver: v1, Flags: 11,
> > > > > > > > [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand"
> > > > > > > > :{"_proxyVmId":1,"_loadInfo":"{\n
> > > > > > > > \"connections\": []\n}","wait":0}}] }
> > > > > > > > 2018-02-09 14:06:11,703 DEBUG [c.c.a.m.AgentManagerImpl]
> > > > > > > > (AgentManager-Handler-2:null) (logid:) SeqA 5-277: Sending
> Seq
> > > > 5-277:
> > > > > > {
> > > > > > > > Ans: , MgmtId: 264216221068220, via: 5, Ver: v1, Flags:
> 100010,
> > > > > > > > [{"com.cloud.agent.api.AgentControlAnswer":{"result":
> > > > > true,"wait":0}}]
> > > > > > }
> > > > > > > > 2018-02-09 14:06:14,503 INFO  [o.a.c.f.j.i.
> > AsyncJobManagerImpl]
> > > > > > > > (AsyncJobMgr-Heartbeat-1:ctx-0a480d6a) (logid:b7a572a2)
> Begin
> > > > > cleanup
> > > > > > > > expired async-jobs
> > > > > > > > 2018-02-09 14:06:14,505 INFO  [o.a.c.f.j.i.
> > AsyncJobManagerImpl]
> > > > > > > > (AsyncJobMgr-Heartbeat-1:ctx-0a480d6a) (logid:b7a572a2) End
> > > > > > > > cleanup expired async-jobs
> > > > > > > > 2018-02-09 14:06:14,582 DEBUG [c.c.n.r.
> > > > > VirtualNetworkApplianceManager
> > > > > > > Impl]
> > > > > > > > (RouterStatusMonitor-1:ctx-528acad3) (logid:a77479b7) Found
> 0
> > > > > routers
> > > > > > to
> > > > > > > > update status.
> > > > > > > > 2018-02-09 14:06:14,583 DEBUG [c.c.n.r.
> > > > > VirtualNetworkApplianceManager
> > > > > > > Impl]
> > > > > > > > (RouterStatusMonitor-1:ctx-528acad3) (logid:a77479b7) Found
> 0
> > > > > > > > VPC
> > > > > > > networks
> > > > > > > > to update Redundant State.
> > > > > > > > 2018-02-09 14:06:14,584 DEBUG [c.c.n.r.
> > > > > VirtualNetworkApplianceManager
> > > > > > > Impl]
> > > > > > > > (RouterStatusMonitor-1:ctx-528acad3) (logid:a77479b7) Found
> 0
> > > > > networks
> > > > > > > to
> > > > > > > > update RvR status.
> > > > > > > > 2018-02-09 14:06:16,892 DEBUG [c.c.a.m.AgentManagerImpl]
> > > > > > > > (AgentManager-Handler-4:null) (logid:) Ping from
> > > > > > > > 3(mtl1-apphst03)
> > > > > > > > 2018-02-09 14:06:16,892 DEBUG
> > > > > > > > [c.c.v.VirtualMachinePowerStateSyncIm
> > > > > pl]
> > > > > > > > (AgentManager-Handler-4:null) (logid:) Process host VM state
> > > > > > > > report
> > > > > > from
> > > > > > > > ping process. host: 3
> > > > > > > > 2018-02-09 14:06:16,895 DEBUG
> > > > > > > > [c.c.v.VirtualMachinePowerStateSyncIm
> > > > > pl]
> > > > > > > > (AgentManager-Handler-4:null) (logid:) Process VM state
> report.
> > > > host:
> > > > > > 3,
> > > > > > > > number of records in report: 2
> > > > > > > > 2018-02-09 14:06:16,895 DEBUG
> > > > > > > > [c.c.v.VirtualMachinePowerStateSyncIm
> > > > > pl]
> > > > > > > > (AgentManager-Handler-4:null) (logid:) VM state report. host:
> > 3,
> > > > > > > > vm
> > > > > id:
> > > > > > > 1,
> > > > > > > > power state: PowerOn
> > > > > > > > 2018-02-09 14:06:16,897 DEBUG
> > > > > > > > [c.c.v.VirtualMachinePowerStateSyncIm
> > > > > pl]
> > > > > > > > (AgentManager-Handler-4:null) (logid:) VM power state does
> not
> > > > > change,
> > > > > > > skip
> > > > > > > > DB writing. vm id: 1
> > > > > > > > 2018-02-09 14:06:16,897 DEBUG
> > > > > > > > [c.c.v.VirtualMachinePowerStateSyncIm
> > > > > pl]
> > > > > > > > (AgentManager-Handler-4:null) (logid:) VM state report. host:
> > 3,
> > > > > > > > vm
> > > > > id:
> > > > > > > 2,
> > > > > > > > power state: PowerOn
> > > > > > > > 2018-02-09 14:06:16,898 DEBUG
> > > > > > > > [c.c.v.VirtualMachinePowerStateSyncIm
> > > > > pl]
> > > > > > > > (AgentManager-Handler-4:null) (logid:) VM power state does
> not
> > > > > change,
> > > > > > > skip
> > > > > > > > DB writing. vm id: 2
> > > > > > > > 2018-02-09 14:06:16,899 DEBUG
> > > > > > > > [c.c.v.VirtualMachinePowerStateSyncIm
> > > > > pl]
> > > > > > > > (AgentManager-Handler-4:null) (logid:) Done with process of
> VM
> > > > > > > > state report. host: 3
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > On Fri, Feb 9, 2018 at 4:47 PM, Daan Hoogland <
> > > > > daan.hoogl...@gmail.com
> > > > > > >
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > And with CloudStack (and MySQL) running locally, right?
> > > > > > > > > So my next step would be to stop CloudStack and see if you
> > > > > > > > > can start the image with virsh.
> > > > > > > > > CloudStack thinks for some reason the host is not a suitable
> > > > > > > > > target. I have no clue why that is from your log. You can
> > > > > > > > > start CloudStack in a debugger, but given you are asking on
> > > > > > > > > users@, I don't think you are familiar with that kind of
> > > > > > > > > work, are you?
> > > > > > > > > You might also want to look in the management (and agent)
> > > > > > > > > log to see if you can find anything about the capacity
> > > > > > > > > reported. Look for GetHostStatsAnswer.
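> > > > > > > > > For example, something like this (the log paths assume the
> > > > > > > > > default packaging layout - adjust if yours differ):
> > > > > > > > > # grep GetHostStatsAnswer /var/log/cloudstack/management/management-server.log
> > > > > > > > > # grep -i capacity /var/log/cloudstack/agent/agent.log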
> > > > > > > > >
> > > > > > > > > On Fri, Feb 9, 2018 at 3:11 PM, Jevgeni Zolotarjov <
> > > > > > > > j.zolotar...@gmail.com
> > > > > > > > > >
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Yes, it's KVM.
> > > > > > > > > >
> > > > > > > > > > On Fri, Feb 9, 2018 at 4:06 PM, Daan Hoogland <
> > > > > > > daan.hoogl...@gmail.com
> > > > > > > > >
> > > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > I am at a loss but really would like to help you.
> > > > > > > > > > > Is it a KVM host with the management server running locally?
> > > > > > > > > > >
> > > > > > > > > > > On Fri, Feb 9, 2018 at 3:02 PM, Jevgeni Zolotarjov <
> > > > > > > > > > j.zolotar...@gmail.com
> > > > > > > > > > > >
> > > > > > > > > > > wrote:
> > > > > > > > > > >
> > > > > > > > > > > > My host is only one machine at the moment:
> > > > > > > > > > > > Dell PowerEdge610
> > > > > > > > > > > > OS: CentOS7 (latest)
> > > > > > > > > > > > CPU: 2x12 cores (24 cores in total)
> > > > > > > > > > > > RAM: 192 GB
> > > > > > > > > > > > Storage: 3+ TB
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > > Daan
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > Daan
> > > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Daan
> > > > >
> > > >
> > >
> >
> >
> >
> > --
> > Daan
> >
>
