Greg,

What do you mean by "***Recently***, we downgraded back to 4.9.3 to re-run
the processes" - did you create a lot of new resources with the upgraded
env (4.11 code) and then restore the DB to 4.9.3? If so, you need to
delete ALL (I mean all) resources you might have created - any VMs, VRs,
volumes, networks (i.e. bridges or similar), etc. - otherwise you will hit
all kinds of issues when ACS 4.9.3 tries to create volumes/VMs with the
same names, create bridges/vnets with the same names, etc. (which already
exist because they were created by 4.11).
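
For example, on each KVM host, something like the below can help you spot
leftovers (just a rough sketch - the r-123-VM name is only an example, and
exactly what was created depends on your 4.11 run):

  # any instance (i-x-y-VM) or router (r-x-VM) domains left over from 4.11?
  virsh list --all
  # leftover bridges/vnets on the host
  brctl show          # or: ip link show type bridge
  # stop a leftover (transient) domain - example name only
  virsh destroy r-123-VM

Also check the primary storage mount for qcow2 volumes that were created
during the 4.11 run (the path depends on your pool setup).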

Rolling back after running the new code and creating new resources for some
time is a challenge to handle - and NOT recommended by any means.

Best,
Andrija

On Fri, 13 Dec 2019 at 10:03, Thomas Joseph <thomas.jo...@gmail.com> wrote:

> Hello Greg,
>
> Check whether any of your primary or secondary storage is in maintenance
> mode - specifically the storage that holds the uuid mentioned in the error
> message:
> Can't find volume:cb1b9a18-6fdb-4613-8153-9595c2f70f58","wait":0
> Do you have the version-specific system VM template in place, and have you
> updated the template version in the corresponding global settings parameter?
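>
> A quick way to check the storage side (a rough sketch, straight against the
> cloud DB - adjust the DB user/name to your install; column names can vary a
> bit between versions):
>
>   mysql -u cloud -p cloud -e "select id, name, status from storage_pool;"
>   mysql -u cloud -p cloud -e "select id, name, state, removed, pool_id from volumes where uuid='cb1b9a18-6fdb-4613-8153-9595c2f70f58';"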
>
> With regards
> Thomas
>
> On Thu, 12 Dec 2019, 10:34 pm Greg Goodrich, <ggoodr...@ippathways.com>
> wrote:
>
> > We’ve been doing test runs in our staging environment of upgrading from
> > 4.9.3 to 4.11, leveraging KVM on our 2 hosts. Recently, we downgraded
> > back to 4.9.3 to re-run the processes. We restored the database, the
> > software on all machines, etc. However, now the virtual routers are
> > failing to start, and I’m having difficulties tracking down exactly what
> > is going wrong. I’ll admit to not being adept at working with CloudStack
> > yet. I’ve tried restarting the network, both with and without cleanup.
> > I’ve also tried starting the routers via the Infrastructure -> Routers
> > area. After a decent amount of time, the status is updated to Stopped.
> > The VMs never seem to come online; I’ve checked virsh on both of our
> > hosts, and the only VMs that seem to be running are the console proxy
> > and secondary storage VMs. I also tried restarting the agents on both
> > hosts, as well as the management node and libvirt.
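> >
> > For reference, I ran the restarts roughly like this from cloudmonkey (the
> > IDs below are placeholders, not the real ones):
> >
> >   restart network id=<network-uuid> cleanup=true
> >   restart network id=<network-uuid> cleanup=false
> >   start router id=<router-uuid>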
> >
> > Here are some snippets from the logs that seem concerning to me, that may
> > or may not help (these are not in order, and one is from restarting
> > network, other from starting VR manually):
> >
> > 2019-12-12 18:00:34,557 DEBUG [c.c.a.t.Request]
> > (AgentManager-Handler-10:null) (logid:) Seq 96-6863767307089348712:
> > Processing:  { Ans: , MgmtId: 345051498372, via: 96, Ver: v1, Flags: 110,
> > [{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"result":false,"details":"com.cloud.utils.exception.CloudRuntimeException: com.cloud.utils.exception.CloudRuntimeException: Can't find volume:cb1b9a18-6fdb-4613-8153-9595c2f70f58","wait":0}}] }
> >
> > 2019-12-12 15:44:26,623 DEBUG [c.c.a.t.Request]
> > (AgentManager-Handler-4:null) (logid:) Seq 96-6863767307089348508:
> > Processing:  { Ans: , MgmtId: 345051498372, via: 96, Ver: v1, Flags: 10,
> > [{"com.cloud.agent.api.UnsupportedAnswer":{"result":false,"details":"Unsupported command issued: com.cloud.agent.api.StartCommand.  Are you sure you got the right type of server?","wait":0}},
> > {"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped by previous failure","wait":0}},
> > {"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped by previous failure","wait":0}},
> > {"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped by previous failure","wait":0}},
> > {"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped by previous failure","wait":0}},
> > {"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped by previous failure","wait":0}},
> > {"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped by previous failure","wait":0}},
> > {"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped by previous failure","wait":0}},
> > {"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped by previous failure","wait":0}}] }
> > 2019-12-12 15:44:26,623 DEBUG [c.c.a.t.Request]
> > (Work-Job-Executor-20:ctx-a8bb9961 job-31381/job-31386 ctx-951bce3f)
> > (logid:010e3619) Seq 96-6863767307089348508: Received:  { Ans: , MgmtId:
> > 345051498372, via: 96(labcloudkvm01.ipplab.corp), Ver: v1, Flags: 10, {
> > UnsupportedAnswer, Answer, Answer, Answer, Answer, Answer, Answer, Answer,
> > Answer } }
> >
> > Any hints on which path I should go down next would be greatly appreciated!
> >
> > --
> > Greg Goodrich | IP Pathways
> > Senior Developer
> > 3600 109th Street | Urbandale, IA 50322
> > p. 515.422.9346 | e. ggoodr...@ippathways.com
> >
> >
>


-- 

Andrija Panić
