Re: Disaster after maintenance

2019-03-20 Thread Sergey Levitskiy
+1 on the advice to start from scratch.



Provisioning is failing because it can’t spin up either the SSVM or the console proxy due to
insufficient capacity. The reason might be:

  *   Not enough CPU or RAM capacity. Increasing the overprovisioning factors or
lowering the capacity disable thresholds might help (see the sketch after this list).
  *   Hosts in an error state
  *   Cluster disabled
  *   A problem accessing the primary and/or secondary storage mounts from the
management server host
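As a rough sketch of how to check those (the setting names and the 'cloud' DB layout are assumptions based on a stock 4.11 install - verify against your own environment; cmk is the CloudMonkey CLI):

# Capacity as CloudStack sees it, per zone/cluster/host
cmk list capacity
# ...or straight from the database (used vs. total CPU/RAM per host)
mysql -u cloud -p cloud -e "SELECT host_id, capacity_type, used_capacity, reserved_capacity, total_capacity FROM op_host_capacity;"

# Overprovisioning factors and allocation disable thresholds
cmk list configurations name=cpu.overprovisioning.factor
cmk list configurations name=mem.overprovisioning.factor
cmk list configurations name=cluster.cpu.allocated.capacity.disablethreshold
cmk list configurations name=cluster.memory.allocated.capacity.disablethreshold

# Example: raise the CPU overprovisioning factor (applies to new allocations)
cmk update configuration name=cpu.overprovisioning.factor value=2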







2019-03-20 15:07:39,218 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
(Work-Job-Executor-37:ctx-3cad2de4 job-5120/job-7077 ctx-6b705264) 
(logid:49483c7a) Trying to allocate a host and storage pools from dc:3, 
pod:null,cluster:null, requested cpu: 500, requested ram: 536870912

2019-03-20 15:07:39,218 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
(Work-Job-Executor-37:ctx-3cad2de4 job-5120/job-7077 ctx-6b705264) 
(logid:49483c7a) Is ROOT volume READY (pool already allocated)?: No

2019-03-20 15:07:39,219 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
(Work-Job-Executor-37:ctx-3cad2de4 job-5120/job-7077 ctx-6b705264) 
(logid:49483c7a) Deploy avoids pods: null, clusters: null, hosts: null

2019-03-20 15:07:39,219 DEBUG [c.c.d.FirstFitPlanner] 
(Work-Job-Executor-37:ctx-3cad2de4 job-5120/job-7077 ctx-6b705264) 
(logid:49483c7a) Searching all possible resources under this Zone: 3

2019-03-20 15:07:39,219 DEBUG [c.c.d.FirstFitPlanner] 
(Work-Job-Executor-38:ctx-f824bfeb job-5119/job-7076 ctx-9498) 
(logid:bc39cd2a) No clusters found having a host with enough capacity, 
returning.

2019-03-20 15:07:39,219 DEBUG [c.c.d.FirstFitPlanner] 
(Work-Job-Executor-37:ctx-3cad2de4 job-5120/job-7077 ctx-6b705264) 
(logid:49483c7a) Listing clusters in order of aggregate capacity, that have 
(atleast one host with) enough CPU and RAM capacity under this Zone: 3

2019-03-20 15:07:39,221 DEBUG [c.c.d.FirstFitPlanner] 
(Work-Job-Executor-37:ctx-3cad2de4 job-5120/job-7077 ctx-6b705264) 
(logid:49483c7a) No clusters found having a host with enough capacity, 
returning.





On 3/20/19, 10:38 AM, "Andrija Panic"  wrote:



Hi Jevgeni,



I would perhaps consider you continue with plan B from your separate email

thread (root volumes --> create snapshots, convert snaps to template,

download template somewhere safe - for DATA volumes, also create snapshots,

then convert to volume and download it (or simply directly download

existing DATA volume if VM is stopped).

Once you are safe, and all templates, and VM volumes are safe, you are good

to reinstall.

Seriously, I'm not sure how to proceed via ML - if this was my own setup,

probably would be able to fix it...



In next installment, start with clean 4.11.2 (4.10 was never released as an

official release and was SERIOUSLY broken), or even 4.12 which has just

been released (will be in 1-2 days).

In this new installment, please dedicate a VM (or physical server) that

will host mgmt+DB+NFS (or even better separate NFS on different server

etc.) - but certainly do NOT collocate management components with KVM role.

When you build zone successfully, you can then import all templates and

upload all volumes (which you saved previously to some external place, web

server)

This will allow you to restore your VMs - possibly with just different IPs

versus original ones.



Considering failed DB upgrades and issues you see now, I assume your env,

might be severely broken at this point, and warrants starting from

scratch...



Hope that makes sense - so again, download all root and data volumes to

safe place (consider some petrol + matches fun) and then reinstall with a

fresh and shiny infra.



Alternatively, I would try to wipe all new zones (this takes some time and

certain steps) and then continue troubleshooting with failed-to-start VRs.



Cheers



On Wed, 20 Mar 2019 at 17:59, Jevgeni Zolotarjov 

wrote:



> It started with 4.10 and then gradually upgraded with all stops, when new

> releases were available.

>

>

> >>> Why do you have 3 zones in this installation - what is the setup ?

> >>> SSVM and CPVM (for whatever zone) are failing to be created...

> Its a result of attempts to create new zone and somehow move VMs to this

> new zone. These all are unsuccessful attempts.

> Before problem started there was 1 zone and There should be just 1 zone in

> reality.

>

>

> >>> yes, the VR can't be started, it get's timeout - in AGENT logs, I see

> that

> >>> it attemps to create a volume on primary storage...

> I guess this is the root cause. I checked, and primary storage is

> accessible via NFS share on both hosts. How to troubleshoot it?

>

>

> On Wed, Mar 20, 2019 at 6:29 PM Andrija Panic 

> wrote:

>

> > Hi,

> >

> > 2019-03-20 06:41:50,446 INFO  [c.c.u.DatabaseUpgradeChecker] (main:null)

> > 

Re: Disaster after maintenance

2019-03-20 Thread Andrija Panic
Hi Jevgeni,

I would perhaps suggest you continue with plan B from your separate email
thread (root volumes --> create snapshots, convert the snapshots to templates,
download the templates somewhere safe; for DATA volumes, also create snapshots,
then convert each to a volume and download it, or simply download the existing
DATA volume directly if the VM is stopped).
Once all templates and VM volumes are safely stored away, you are good
to reinstall.
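A rough sketch of that export path with CloudMonkey (every ID and name below is a placeholder - look up the real ones with 'list volumes', 'list snapshots', 'list ostypes' and so on):

# ROOT volume: snapshot -> template -> downloadable URL
cmk create snapshot volumeid=<root-volume-id>
cmk create template snapshotid=<snapshot-id> name=vm01-root-backup displaytext=vm01-root-backup ostypeid=<os-type-id>
cmk extract template id=<new-template-id> zoneid=<zone-id> mode=HTTP_DOWNLOAD

# DATA volume (VM stopped): extract it directly
cmk extract volume id=<data-volume-id> zoneid=<zone-id> mode=HTTP_DOWNLOAD

# Pull the extracted files somewhere outside the cloud
wget -O vm01-root.qcow2 "<url-returned-by-extract>"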
Seriously, I'm not sure how to proceed via the mailing list - if this were my own
setup, I would probably be able to fix it...

For the next installation, start with a clean 4.11.2 (4.10 was never released as an
official release and was SERIOUSLY broken), or even 4.12, which has just
been released (packages should be out in 1-2 days).
In this new installation, please dedicate a VM (or physical server) to host
mgmt+DB+NFS (or, even better, put NFS on a separate server) - but certainly
do NOT co-locate the management components with the KVM role.
When you have built the zone successfully, you can then import all templates and
upload all volumes (which you previously saved to some external place, e.g. a web
server).
This will allow you to restore your VMs - possibly just with different IPs
than the original ones.

Considering the failed DB upgrades and the issues you see now, I assume your
environment might be severely broken at this point and warrants starting from
scratch...

Hope that makes sense - so again, download all root and data volumes to a
safe place (consider some petrol + matches fun) and then reinstall with a
fresh and shiny infrastructure.

Alternatively, I would try to wipe all the new zones (this takes some time and
certain steps) and then continue troubleshooting the failed-to-start VRs.

Cheers

On Wed, 20 Mar 2019 at 17:59, Jevgeni Zolotarjov 
wrote:

> It started with 4.10 and then gradually upgraded with all stops, when new
> releases were available.
>
>
> >>> Why do you have 3 zones in this installation - what is the setup ?
> >>> SSVM and CPVM (for whatever zone) are failing to be created...
> Its a result of attempts to create new zone and somehow move VMs to this
> new zone. These all are unsuccessful attempts.
> Before problem started there was 1 zone and There should be just 1 zone in
> reality.
>
>
> >>> yes, the VR can't be started, it get's timeout - in AGENT logs, I see
> that
> >>> it attemps to create a volume on primary storage...
> I guess this is the root cause. I checked, and primary storage is
> accessible via NFS share on both hosts. How to troubleshoot it?
>
>
> On Wed, Mar 20, 2019 at 6:29 PM Andrija Panic 
> wrote:
>
> > Hi,
> >
> > 2019-03-20 06:41:50,446 INFO  [c.c.u.DatabaseUpgradeChecker] (main:null)
> > (logid:) DB version = 4.10.0.0 Code Version = 4.11.2.0
> > 2019-03-20 06:41:50,447 DEBUG [c.c.u.DatabaseUpgradeChecker] (main:null)
> > (logid:) Running upgrade Upgrade41000to41100 to upgrade from
> > 4.10.0.0-4.11.0.0 to 4.11.0.0
> > fails due to
> > java.sql.SQLException: Error on rename of './cloud/ldap_trust_map' to
> > './cloud/#sql2-2f01-13d' (errno: 152)
> >
> > Then later...
> >
> > com.cloud.exception.InsufficientServerCapacityException: Unable to
> create a
> > deployment for VM[SecondaryStorageVm|s-734-VM]Scope=interface
> > com.cloud.dc.DataCenter; id=3
> > com.cloud.exception.InsufficientServerCapacityException: Unable to
> create a
> > deployment for VM[ConsoleProxy|v-733-VM]Scope=interface
> > com.cloud.dc.DataCenter; id=3
> >
> > 2019-03-20 15:02:39,113 DEBUG [o.a.c.s.SecondaryStorageManagerImpl]
> > (secstorage-1:ctx-059f87f3) (logid:cf6cf89a) Zone 1 is ready to launch
> > secondary storage VM
> > 2019-03-20 15:02:39,117 DEBUG [o.a.c.s.SecondaryStorageManagerImpl]
> > (secstorage-1:ctx-059f87f3) (logid:cf6cf89a) Zone 2 is not ready to
> launch
> > secondary storage VM yet
> > 2019-03-20 15:02:39,122 DEBUG [o.a.c.s.SecondaryStorageManagerImpl]
> > (secstorage-1:ctx-059f87f3) (logid:cf6cf89a) Zone 3 is ready to launch
> > secondary storage VM
> >
> > so did you start with clean 4.11.2 install, or was it upgraded one - I
> see
> > in logs an upgrade from DB version 4.10 to 4.11 was tried and failed...
> > Why do you have 3 zones in this installation - what is the setup ?
> > SSVM and CPVM (for whatever zone) are failing to be created...
> >
> > yes, the VR can't be started, it get's timeout - in AGENT logs, I see
> that
> > it attemps to create a volume on primary storage...
> >
> >
> > Also, for SSVM I got this one...
> > 2019-03-20 14:38:09,227 DEBUG [c.c.d.FirstFitPlanner]
> > (Work-Job-Executor-96:ctx-04c5c9f2 job-5120/job-6960 ctx-fde3d4d7)
> > (logid:49483c7a) No clusters found having a host with enough capacity,
> > returning.
> >
> > Andrija
> >
> > On Wed, 20 Mar 2019 at 16:39, Jevgeni Zolotarjov  >
> > wrote:
> >
> > > Basic Zone - Yes
> > >
> > > router has been actually started/created on KVM side - not created, not
> > > started. Thats the main problem, I guess
> > >
> > > agent.log
> > > https://drive.google.com/open?id=1rATxHKqgNKo2kD23BtlrZy_9gFXC-Bq-
> > >
> > > management log
> > > 

Re: Disaster after maintenance

2019-03-20 Thread Jevgeni Zolotarjov
It started with 4.10 and was then gradually upgraded, stopping at every
intermediate release as new releases became available.


>>> Why do you have 3 zones in this installation - what is the setup ?
>>> SSVM and CPVM (for whatever zone) are failing to be created...
It's the result of attempts to create a new zone and somehow move the VMs to it.
All of these attempts were unsuccessful.
Before the problem started there was 1 zone, and in reality there should be just
1 zone.


>>> yes, the VR can't be started, it get's timeout - in AGENT logs, I see
that
>>> it attemps to create a volume on primary storage...
I guess this is the root cause. I checked, and the primary storage NFS share is
accessible on both hosts. How do I troubleshoot it?
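A rough set of checks for that, to run on both KVM hosts and on the management server (the NFS server IP and export path below are placeholders for your own values):

# Is the export visible and mountable from this host?
showmount -e 192.168.1.14
mkdir -p /mnt/nfs-test
mount -t nfs 192.168.1.14:/export/primary /mnt/nfs-test
touch /mnt/nfs-test/rw-test && rm /mnt/nfs-test/rw-test   # verify read/write
umount /mnt/nfs-test

# On the KVM hosts: is the libvirt storage pool for primary storage still defined and active?
virsh pool-list --all
# ...and is the NFS mount the agent created still present?
mount | grep -i nfs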


On Wed, Mar 20, 2019 at 6:29 PM Andrija Panic 
wrote:

> Hi,
>
> 2019-03-20 06:41:50,446 INFO  [c.c.u.DatabaseUpgradeChecker] (main:null)
> (logid:) DB version = 4.10.0.0 Code Version = 4.11.2.0
> 2019-03-20 06:41:50,447 DEBUG [c.c.u.DatabaseUpgradeChecker] (main:null)
> (logid:) Running upgrade Upgrade41000to41100 to upgrade from
> 4.10.0.0-4.11.0.0 to 4.11.0.0
> fails due to
> java.sql.SQLException: Error on rename of './cloud/ldap_trust_map' to
> './cloud/#sql2-2f01-13d' (errno: 152)
>
> Then later...
>
> com.cloud.exception.InsufficientServerCapacityException: Unable to create a
> deployment for VM[SecondaryStorageVm|s-734-VM]Scope=interface
> com.cloud.dc.DataCenter; id=3
> com.cloud.exception.InsufficientServerCapacityException: Unable to create a
> deployment for VM[ConsoleProxy|v-733-VM]Scope=interface
> com.cloud.dc.DataCenter; id=3
>
> 2019-03-20 15:02:39,113 DEBUG [o.a.c.s.SecondaryStorageManagerImpl]
> (secstorage-1:ctx-059f87f3) (logid:cf6cf89a) Zone 1 is ready to launch
> secondary storage VM
> 2019-03-20 15:02:39,117 DEBUG [o.a.c.s.SecondaryStorageManagerImpl]
> (secstorage-1:ctx-059f87f3) (logid:cf6cf89a) Zone 2 is not ready to launch
> secondary storage VM yet
> 2019-03-20 15:02:39,122 DEBUG [o.a.c.s.SecondaryStorageManagerImpl]
> (secstorage-1:ctx-059f87f3) (logid:cf6cf89a) Zone 3 is ready to launch
> secondary storage VM
>
> so did you start with clean 4.11.2 install, or was it upgraded one - I see
> in logs an upgrade from DB version 4.10 to 4.11 was tried and failed...
> Why do you have 3 zones in this installation - what is the setup ?
> SSVM and CPVM (for whatever zone) are failing to be created...
>
> yes, the VR can't be started, it get's timeout - in AGENT logs, I see that
> it attemps to create a volume on primary storage...
>
>
> Also, for SSVM I got this one...
> 2019-03-20 14:38:09,227 DEBUG [c.c.d.FirstFitPlanner]
> (Work-Job-Executor-96:ctx-04c5c9f2 job-5120/job-6960 ctx-fde3d4d7)
> (logid:49483c7a) No clusters found having a host with enough capacity,
> returning.
>
> Andrija
>
> On Wed, 20 Mar 2019 at 16:39, Jevgeni Zolotarjov 
> wrote:
>
> > Basic Zone - Yes
> >
> > router has been actually started/created on KVM side - not created, not
> > started. Thats the main problem, I guess
> >
> > agent.log
> > https://drive.google.com/open?id=1rATxHKqgNKo2kD23BtlrZy_9gFXC-Bq-
> >
> > management log
> > https://drive.google.com/open?id=1H2jI0roeiWxtzReB8qV6QxDkNpaki99A
> >
> > >> Can you confirm your zone/pod/cluster/hosts are all in Enabled state,
> > i.e.
> > YES, all green
> >
> > >> Can you connect your both KVM hosts can access/mount both Primary and
> > Secondary Storage
> > YES. Double checked
> >
> > >>>Can you also explain your infrastructure - you said you have two hosts
> > only, where does CloudStack management run?
> > 2 hosts:
> > host1: 192.168.1.14
> > host2: 192.168.1.5
> >
> > Servers are standing next to each other - connected to the same switch
> > Management server runs on the same physical server with host1
> >
> > I noticed, that Virtual router gets created after I try to start any of
> the
> > existing guest VM
> > Here are logs
> > management:
> > https://drive.google.com/open?id=1H2jI0roeiWxtzReB8qV6QxDkNpaki99A
> >
> > agent on host1:
> > https://drive.google.com/open?id=1u8YHYIuyU2MA2UKY7G5z7q8p5XxU1zsy
> >
> > agent on host2:
> > https://drive.google.com/open?id=1YzkCL-FmTgPva-QHHp5vTM5Nb3qAXxz4
> >
> > But this virtual router stays in Starting state forever and hence VMs do
> > not start either.
> >
> > On Wed, Mar 20, 2019 at 2:49 PM Andrija Panic 
> > wrote:
> >
> > > Just to confirm, you are using Basic Zone in CloudStack, right ?
> > >
> > > Can you confirm that router has been actually started/created on KVM
> > side,
> > > again, as requested please post logs (mgmt and agent - and note the
> time
> > > around which you tried to start VR last time it partially succeeded) -
> we
> > > can't guess what went wrong without logs.
> > >
> > > I would push more effort solving this one, instead of reinstalling -
> you
> > > might hit the issue again and then it's no good.
> > >
> > > Can you confirm your zone/pod/cluster/hosts are all in Enabled state,
> > i.e.
> > > not disabled and hosts connected AND both SSVM and CPVM are
> > > connectedUP/green
> > > 

CloudStack meetup - roundup and videos

2019-03-20 Thread Steve Roles
Hi all,

Great CloudStack event last week! Thanks to Ticketmaster for hosting a
fantastic event, and thanks to all our speakers. My roundup article, including
links to the presentations and videos of all the talks, is here:
https://www.shapeblue.com/cloudstack-european-user-group-cseug-roundup-london-march-2019/

Our next event is in Sofia on Thursday, June 13, and registration is open: 
https://www.eventbrite.co.uk/e/cloudstack-european-user-group-meetup-tickets-55911193886.
 We are looking for talks, and if you are interested in speaking at our event, 
please let me know.

Hope to see you soon!

steve.ro...@shapeblue.com 
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue
  
 



Re: Disaster after maintenance

2019-03-20 Thread Andrija Panic
Hi,

2019-03-20 06:41:50,446 INFO  [c.c.u.DatabaseUpgradeChecker] (main:null)
(logid:) DB version = 4.10.0.0 Code Version = 4.11.2.0
2019-03-20 06:41:50,447 DEBUG [c.c.u.DatabaseUpgradeChecker] (main:null)
(logid:) Running upgrade Upgrade41000to41100 to upgrade from
4.10.0.0-4.11.0.0 to 4.11.0.0
fails due to
java.sql.SQLException: Error on rename of './cloud/ldap_trust_map' to
'./cloud/#sql2-2f01-13d' (errno: 152)
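A hedged way to dig into that rename failure (the 'cloud' schema name and DB user are assumptions; perror ships with the MySQL client tools):

# Translate the low-level error number from the exception
perror 152

# List key constraints on or against ldap_trust_map - a foreign-key constraint is a
# likely suspect when an ALTER (which MySQL performs via a table rename) fails
mysql -u cloud -p -e "
  SELECT TABLE_SCHEMA, TABLE_NAME, CONSTRAINT_NAME, COLUMN_NAME
  FROM information_schema.KEY_COLUMN_USAGE
  WHERE REFERENCED_TABLE_NAME='ldap_trust_map' OR TABLE_NAME='ldap_trust_map';"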

Then later...

com.cloud.exception.InsufficientServerCapacityException: Unable to create a
deployment for VM[SecondaryStorageVm|s-734-VM]Scope=interface
com.cloud.dc.DataCenter; id=3
com.cloud.exception.InsufficientServerCapacityException: Unable to create a
deployment for VM[ConsoleProxy|v-733-VM]Scope=interface
com.cloud.dc.DataCenter; id=3

2019-03-20 15:02:39,113 DEBUG [o.a.c.s.SecondaryStorageManagerImpl]
(secstorage-1:ctx-059f87f3) (logid:cf6cf89a) Zone 1 is ready to launch
secondary storage VM
2019-03-20 15:02:39,117 DEBUG [o.a.c.s.SecondaryStorageManagerImpl]
(secstorage-1:ctx-059f87f3) (logid:cf6cf89a) Zone 2 is not ready to launch
secondary storage VM yet
2019-03-20 15:02:39,122 DEBUG [o.a.c.s.SecondaryStorageManagerImpl]
(secstorage-1:ctx-059f87f3) (logid:cf6cf89a) Zone 3 is ready to launch
secondary storage VM

So did you start with a clean 4.11.2 install, or was it an upgraded one? I see
in the logs that an upgrade from DB version 4.10 to 4.11 was tried and failed...
Why do you have 3 zones in this installation - what is the setup?
SSVM and CPVM (for whatever zone) are failing to be created...

Yes, the VR can't be started - it gets a timeout. In the AGENT logs, I see that
it attempts to create a volume on primary storage...


Also, for SSVM I got this one...
2019-03-20 14:38:09,227 DEBUG [c.c.d.FirstFitPlanner]
(Work-Job-Executor-96:ctx-04c5c9f2 job-5120/job-6960 ctx-fde3d4d7)
(logid:49483c7a) No clusters found having a host with enough capacity,
returning.

Andrija

On Wed, 20 Mar 2019 at 16:39, Jevgeni Zolotarjov 
wrote:

> Basic Zone - Yes
>
> router has been actually started/created on KVM side - not created, not
> started. Thats the main problem, I guess
>
> agent.log
> https://drive.google.com/open?id=1rATxHKqgNKo2kD23BtlrZy_9gFXC-Bq-
>
> management log
> https://drive.google.com/open?id=1H2jI0roeiWxtzReB8qV6QxDkNpaki99A
>
> >> Can you confirm your zone/pod/cluster/hosts are all in Enabled state,
> i.e.
> YES, all green
>
> >> Can you connect your both KVM hosts can access/mount both Primary and
> Secondary Storage
> YES. Double checked
>
> >>>Can you also explain your infrastructure - you said you have two hosts
> only, where does CloudStack management run?
> 2 hosts:
> host1: 192.168.1.14
> host2: 192.168.1.5
>
> Servers are standing next to each other - connected to the same switch
> Management server runs on the same physical server with host1
>
> I noticed, that Virtual router gets created after I try to start any of the
> existing guest VM
> Here are logs
> management:
> https://drive.google.com/open?id=1H2jI0roeiWxtzReB8qV6QxDkNpaki99A
>
> agent on host1:
> https://drive.google.com/open?id=1u8YHYIuyU2MA2UKY7G5z7q8p5XxU1zsy
>
> agent on host2:
> https://drive.google.com/open?id=1YzkCL-FmTgPva-QHHp5vTM5Nb3qAXxz4
>
> But this virtual router stays in Starting state forever and hence VMs do
> not start either.
>
> On Wed, Mar 20, 2019 at 2:49 PM Andrija Panic 
> wrote:
>
> > Just to confirm, you are using Basic Zone in CloudStack, right ?
> >
> > Can you confirm that router has been actually started/created on KVM
> side,
> > again, as requested please post logs (mgmt and agent - and note the time
> > around which you tried to start VR last time it partially succeeded) - we
> > can't guess what went wrong without logs.
> >
> > I would push more effort solving this one, instead of reinstalling - you
> > might hit the issue again and then it's no good.
> >
> > Can you confirm your zone/pod/cluster/hosts are all in Enabled state,
> i.e.
> > not disabled and hosts connected AND both SSVM and CPVM are
> > connectedUP/green
> > Is your dashboard in GUI all green - no issues there ?
> > Can you connect your both KVM hosts can access/mount both Primary and
> > Secondary Storage
> >
> > On Wed, 20 Mar 2019 at 13:15, Jevgeni Zolotarjov  >
> > wrote:
> >
> > > After dozen of attempts, the Virtual Router could finally be recreated.
> > But
> > > its in eternal Starting status, and console prompts it required upgrade
> > and
> > > Version is UNKNOWN
> > >
> > > It does not resolve the problem, I cannot move further form this point.
> > > Any hints?
> > >
> > > Or I am condemned to do reinstall cloudstack from scratch?
> > >
> > > On Wed, Mar 20, 2019 at 11:08 AM Jevgeni Zolotarjov <
> > > j.zolotar...@gmail.com>
> > > wrote:
> > >
> > > > Under this defaultGuestNetwork, I go to Virtual Appliances. There is
> no
> > > > VMS - "no data to show"
> > > >
> > > > I dont have any network, other than this single default one.
> > > >
> > > > I've tried adding new network - Add guest network. But I am not able
> 

Re: Disaster after maintenance

2019-03-20 Thread Jevgeni Zolotarjov
Basic Zone - Yes

Router actually started/created on the KVM side - no, it was neither created nor
started. That's the main problem, I guess.

agent.log
https://drive.google.com/open?id=1rATxHKqgNKo2kD23BtlrZy_9gFXC-Bq-

management log
https://drive.google.com/open?id=1H2jI0roeiWxtzReB8qV6QxDkNpaki99A

>> Can you confirm your zone/pod/cluster/hosts are all in Enabled state,
i.e.
YES, all green

>> Can you connect your both KVM hosts can access/mount both Primary and
Secondary Storage
YES. Double checked

>>>Can you also explain your infrastructure - you said you have two hosts
only, where does CloudStack management run?
2 hosts:
host1: 192.168.1.14
host2: 192.168.1.5

The servers are standing next to each other, connected to the same switch.
The management server runs on the same physical server as host1.

I noticed that the Virtual Router gets created after I try to start any of the
existing guest VMs.
Here are the logs
management:
https://drive.google.com/open?id=1H2jI0roeiWxtzReB8qV6QxDkNpaki99A

agent on host1:
https://drive.google.com/open?id=1u8YHYIuyU2MA2UKY7G5z7q8p5XxU1zsy

agent on host2:
https://drive.google.com/open?id=1YzkCL-FmTgPva-QHHp5vTM5Nb3qAXxz4

But this virtual router stays in Starting state forever and hence VMs do
not start either.
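A rough way to see what state the router is really in at each layer (the r-* naming and the 'cloud' DB user/schema are assumptions from a stock install):

# On each KVM host: does libvirt actually have a router domain defined/running?
virsh list --all | grep -i 'r-'

# On the management server: what does CloudStack itself think the router state is?
mysql -u cloud -p cloud -e "SELECT id, name, state, host_id FROM vm_instance WHERE name LIKE 'r-%' AND removed IS NULL;"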

On Wed, Mar 20, 2019 at 2:49 PM Andrija Panic 
wrote:

> Just to confirm, you are using Basic Zone in CloudStack, right ?
>
> Can you confirm that router has been actually started/created on KVM side,
> again, as requested please post logs (mgmt and agent - and note the time
> around which you tried to start VR last time it partially succeeded) - we
> can't guess what went wrong without logs.
>
> I would push more effort solving this one, instead of reinstalling - you
> might hit the issue again and then it's no good.
>
> Can you confirm your zone/pod/cluster/hosts are all in Enabled state, i.e.
> not disabled and hosts connected AND both SSVM and CPVM are
> connectedUP/green
> Is your dashboard in GUI all green - no issues there ?
> Can you connect your both KVM hosts can access/mount both Primary and
> Secondary Storage
>
> On Wed, 20 Mar 2019 at 13:15, Jevgeni Zolotarjov 
> wrote:
>
> > After dozen of attempts, the Virtual Router could finally be recreated.
> But
> > its in eternal Starting status, and console prompts it required upgrade
> and
> > Version is UNKNOWN
> >
> > It does not resolve the problem, I cannot move further form this point.
> > Any hints?
> >
> > Or I am condemned to do reinstall cloudstack from scratch?
> >
> > On Wed, Mar 20, 2019 at 11:08 AM Jevgeni Zolotarjov <
> > j.zolotar...@gmail.com>
> > wrote:
> >
> > > Under this defaultGuestNetwork, I go to Virtual Appliances. There is no
> > > VMS - "no data to show"
> > >
> > > I dont have any network, other than this single default one.
> > >
> > > I've tried adding new network - Add guest network. But I am not able to
> > do
> > > so, cause in the wizard popup, it offers empty dropdown with Zones
> > > selection. And this wizard doesnt not allow to go further without
> > selecting
> > > Zone
> > >
> > > On Wed, Mar 20, 2019 at 10:28 AM Andrija Panic <
> andrija.pa...@gmail.com>
> > > wrote:
> > >
> > >> you need to delete/remove all VMs inside this network (tick the
> > "Expunge"
> > >> button during VM deletion - if you want to really delete the VMs) in
> > order
> > >> to be able to delete the network - OR simply attach this VM to another
> > >> network, make this new network a DEFAULT one (NIC that is...), and
> then
> > >> detach from old network - and then effectively your VM was "removed"
> > from
> > >> old network - after this you should be able to delete the old
> network. I
> > >> assume some DB incosistencies perhaps, being the reason you can not
> > >> restart
> > >> the network.
> > >>
> > >> Did you try restarting some other Network - or deploying a new
> network,
> > >> spin a VM in it, then again try to restart this new network - does it
> > >> work ?
> > >>
> > >> Andrija
> > >>
> > >> On Wed, 20 Mar 2019 at 08:58, Jevgeni Zolotarjov <
> > j.zolotar...@gmail.com>
> > >> wrote:
> > >>
> > >> > >>>Stop mgmt,
> > >> > >>>Stop all agents
> > >> > >>>Restart libvirtd (and check libvirt logs afterwards)
> > >> > >>>Start agents
> > >> > >>>Start mgmt.
> > >> >
> > >> > I did that numerous time. Nothing really suspicious
> > >> > I can see that systems VMs are running - both in cloudstack console
> > and
> > >> > with virsh list -all
> > >> >
> > >> > It is apparently problem with network.
> > >> > Is there a way to force recreation of defaultGuestNetwork? or force
> > >> > recreation of Virtual Router.
> > >> > I am unable to delete network, which is supposed to rebuild network
> > with
> > >> > its router. Thats the issue
> > >> >
> > >> > The issue with libvirtd was, that eventually at some point it was
> > >> updated
> > >> > during 4 months of running, and not rebooted. It still worked. We
> had
> > to
> > >> > add listen_tcp = 1 for libvirtd to start working again.
> > >> >
> > >> > On Wed, 

CFP Open - Cloudstack Collaboration Conference NA - 2019

2019-03-20 Thread Giles Sirett
The Cloudstack Collaboration Conference NA will be held 9-11 September 2019 at 
The Flamingo Hotel, Las Vegas !!

http://us.cloudstackcollab.org/

The event is being co-located with Apachecon

There are 2 tracks of CloudStack talks, a full-day hackathon and evening
events planned. Attendees are also welcome to attend other ApacheCon talks. It
presents a great opportunity to learn, share ideas & problems, get to know
other community members and have some fun together.

We've used the same format (i.e. co-locating with ApacheCon) for the last two
years and both events have been a great success. This year is Apache's 20th
anniversary, so a lot of work is going into making this ApacheCon the biggest
and best yet.


CALL FOR PRESENTATIONS (CFP)
Talks and presentations are what make these events.  The event will only be a 
success if the content is varied and interesting and that is where people on 
these lists come in. It would be great to see as many people in our  community 
as possible submitting talks for the event.


The types of talks that have traditionally worked well at CCC are things like:


  *   User/operator stories (i.e. how we use CloudStack at foo-org)
  *   Interesting use-cases of CloudStack
  *   Functionality that you've been working on
  *   A project/integration you've been doing with another tech & CloudStack
  *   Discussions around functional or community shortfalls
  *   Proposals for new functionality


There's no limit on what you can submit - just make sure it's CloudStack related.

Please, everybody, consider doing a talk at CCC to showcase some of the amazing
work that goes on here.
The CFP is now open. Access it via http://us.cloudstackcollab.org/ or directly
at https://asf.jamhosted.net/cfp.html. The CFP closes 13 May.


SPONSORS
Anybody whose organisation may be able to help with sponsorship: to sponsor the
CloudStack Collaboration Conference, we ask interested organisations to sponsor
ApacheCon (our hosts). Details on sponsorship can be found here:
https://www.apachecon.com/acna19/sponsors.html
As well as the benefits of ApacheCon sponsorship, you will be listed as a
sponsor of the CloudStack Collaboration Conference and receive specific thanks
during the keynote/welcome talks.


Kind regards
Giles


giles.sir...@shapeblue.com 
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue
  
 



Re: Disaster after maintenance

2019-03-20 Thread Dag Sonstebo
Jevgeni,

Can you also explain your infrastructure - you said you have two hosts only, 
where does CloudStack management run?

The reason I'm asking is that, when checking your logs from yesterday, the IP address
192.168.1.14 seems to be used for management, NFS and a KVM host. Is this the
case - do you co-host everything on the same server?
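A quick, hedged way to confirm that from 192.168.1.14 itself (ports 8080/8250 are the management server defaults, and the service/unit names vary by distribution):

ss -ltnp | grep -E ':8080|:8250'   # management server listening here?
exportfs -v                        # NFS exports served from this box?
virsh list --all                   # KVM guests also running here?
systemctl status cloudstack-management cloudstack-agent libvirtd nfs-server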

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue
 

On 20/03/2019, 12:49, "Andrija Panic"  wrote:

Just to confirm, you are using Basic Zone in CloudStack, right ?

Can you confirm that router has been actually started/created on KVM side,
again, as requested please post logs (mgmt and agent - and note the time
around which you tried to start VR last time it partially succeeded) - we
can't guess what went wrong without logs.

I would push more effort solving this one, instead of reinstalling - you
might hit the issue again and then it's no good.

Can you confirm your zone/pod/cluster/hosts are all in Enabled state, i.e.
not disabled and hosts connected AND both SSVM and CPVM are
connectedUP/green
Is your dashboard in GUI all green - no issues there ?
Can you connect your both KVM hosts can access/mount both Primary and
Secondary Storage

On Wed, 20 Mar 2019 at 13:15, Jevgeni Zolotarjov 
wrote:

> After dozen of attempts, the Virtual Router could finally be recreated. 
But
> its in eternal Starting status, and console prompts it required upgrade 
and
> Version is UNKNOWN
>
> It does not resolve the problem, I cannot move further form this point.
> Any hints?
>
> Or I am condemned to do reinstall cloudstack from scratch?
>
> On Wed, Mar 20, 2019 at 11:08 AM Jevgeni Zolotarjov <
> j.zolotar...@gmail.com>
> wrote:
>
> > Under this defaultGuestNetwork, I go to Virtual Appliances. There is no
> > VMS - "no data to show"
> >
> > I dont have any network, other than this single default one.
> >
> > I've tried adding new network - Add guest network. But I am not able to
> do
> > so, cause in the wizard popup, it offers empty dropdown with Zones
> > selection. And this wizard doesnt not allow to go further without
> selecting
> > Zone
> >
> > On Wed, Mar 20, 2019 at 10:28 AM Andrija Panic 
> > wrote:
> >
> >> you need to delete/remove all VMs inside this network (tick the
> "Expunge"
> >> button during VM deletion - if you want to really delete the VMs) in
> order
> >> to be able to delete the network - OR simply attach this VM to another
> >> network, make this new network a DEFAULT one (NIC that is...), and then
> >> detach from old network - and then effectively your VM was "removed"
> from
> >> old network - after this you should be able to delete the old network. 
I
> >> assume some DB incosistencies perhaps, being the reason you can not
> >> restart
> >> the network.
> >>
> >> Did you try restarting some other Network - or deploying a new network,
> >> spin a VM in it, then again try to restart this new network - does it
> >> work ?
> >>
> >> Andrija
> >>
> >> On Wed, 20 Mar 2019 at 08:58, Jevgeni Zolotarjov <
> j.zolotar...@gmail.com>
> >> wrote:
> >>
> >> > >>>Stop mgmt,
> >> > >>>Stop all agents
> >> > >>>Restart libvirtd (and check libvirt logs afterwards)
> >> > >>>Start agents
> >> > >>>Start mgmt.
> >> >
> >> > I did that numerous time. Nothing really suspicious
> >> > I can see that systems VMs are running - both in cloudstack console
> and
> >> > with virsh list -all
> >> >
> >> > It is apparently problem with network.
> >> > Is there a way to force recreation of defaultGuestNetwork? or force
> >> > recreation of Virtual Router.
> >> > I am unable to delete network, which is supposed to rebuild network
> with
> >> > its router. Thats the issue
> >> >
> >> > The issue with libvirtd was, that eventually at some point it was
> >> updated
> >> > during 4 months of running, and not rebooted. It still worked. We had
> to
> >> > add listen_tcp = 1 for libvirtd to start working again.
> >> >
> >> > On Wed, Mar 20, 2019 at 9:49 AM Andrija Panic <
> andrija.pa...@gmail.com>
> >> > wrote:
> >> >
> >> > > As Sergey suggested... but i would also verify no libvirt issues or
> >> > storage
> >> > > pool issues - so perhaps:
> >> > >
> >> > > Stop mgmt,
> >> > > Stop all agents
> >> > > Restart libvirtd (and check libvirt logs afterwards)
> >> > > Start agents
> >> > > Start mgmt.
> >> > >
> >> > > What was originally issue with libvirtd ?
> >> > > That sounds fishy to me...
> >> > >
> >> > > Andrija
> >> > >
> >> > > On Wed, Mar 20, 2019, 02:15 Sergey Levitskiy 
> >> > wrote:
> >> > >
> >> > > > select * from networks where 

Re: Disaster after maintenance

2019-03-20 Thread Andrija Panic
Just to confirm, you are using Basic Zone in CloudStack, right ?

Can you confirm that the router has actually been started/created on the KVM side?
Again, as requested, please post the logs (mgmt and agent - and note the time
around which you tried to start the VR the last time it partially succeeded) - we
can't guess what went wrong without logs.

I would put more effort into solving this one instead of reinstalling - you
might hit the issue again and then it's no good.

Can you confirm your zone/pod/cluster/hosts are all in the Enabled state, i.e.
not disabled and the hosts connected, AND that both the SSVM and CPVM are
connected/UP/green?
Is your dashboard in the GUI all green - no issues there?
Can you confirm that both your KVM hosts can access/mount both Primary and
Secondary Storage?
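Most of that checklist can also be read off the API - a rough sketch with CloudMonkey (the field names assume a 4.11-era response):

cmk list zones filter=name,allocationstate
cmk list pods filter=name,allocationstate
cmk list clusters filter=name,allocationstate,managedstate
cmk list hosts type=Routing filter=name,state,resourcestate
cmk list systemvms filter=name,systemvmtype,state,zonename
cmk list storagepools filter=name,state,disksizeused,disksizetotal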

On Wed, 20 Mar 2019 at 13:15, Jevgeni Zolotarjov 
wrote:

> After dozen of attempts, the Virtual Router could finally be recreated. But
> its in eternal Starting status, and console prompts it required upgrade and
> Version is UNKNOWN
>
> It does not resolve the problem, I cannot move further form this point.
> Any hints?
>
> Or I am condemned to do reinstall cloudstack from scratch?
>
> On Wed, Mar 20, 2019 at 11:08 AM Jevgeni Zolotarjov <
> j.zolotar...@gmail.com>
> wrote:
>
> > Under this defaultGuestNetwork, I go to Virtual Appliances. There is no
> > VMS - "no data to show"
> >
> > I dont have any network, other than this single default one.
> >
> > I've tried adding new network - Add guest network. But I am not able to
> do
> > so, cause in the wizard popup, it offers empty dropdown with Zones
> > selection. And this wizard doesnt not allow to go further without
> selecting
> > Zone
> >
> > On Wed, Mar 20, 2019 at 10:28 AM Andrija Panic 
> > wrote:
> >
> >> you need to delete/remove all VMs inside this network (tick the
> "Expunge"
> >> button during VM deletion - if you want to really delete the VMs) in
> order
> >> to be able to delete the network - OR simply attach this VM to another
> >> network, make this new network a DEFAULT one (NIC that is...), and then
> >> detach from old network - and then effectively your VM was "removed"
> from
> >> old network - after this you should be able to delete the old network. I
> >> assume some DB incosistencies perhaps, being the reason you can not
> >> restart
> >> the network.
> >>
> >> Did you try restarting some other Network - or deploying a new network,
> >> spin a VM in it, then again try to restart this new network - does it
> >> work ?
> >>
> >> Andrija
> >>
> >> On Wed, 20 Mar 2019 at 08:58, Jevgeni Zolotarjov <
> j.zolotar...@gmail.com>
> >> wrote:
> >>
> >> > >>>Stop mgmt,
> >> > >>>Stop all agents
> >> > >>>Restart libvirtd (and check libvirt logs afterwards)
> >> > >>>Start agents
> >> > >>>Start mgmt.
> >> >
> >> > I did that numerous time. Nothing really suspicious
> >> > I can see that systems VMs are running - both in cloudstack console
> and
> >> > with virsh list -all
> >> >
> >> > It is apparently problem with network.
> >> > Is there a way to force recreation of defaultGuestNetwork? or force
> >> > recreation of Virtual Router.
> >> > I am unable to delete network, which is supposed to rebuild network
> with
> >> > its router. Thats the issue
> >> >
> >> > The issue with libvirtd was, that eventually at some point it was
> >> updated
> >> > during 4 months of running, and not rebooted. It still worked. We had
> to
> >> > add listen_tcp = 1 for libvirtd to start working again.
> >> >
> >> > On Wed, Mar 20, 2019 at 9:49 AM Andrija Panic <
> andrija.pa...@gmail.com>
> >> > wrote:
> >> >
> >> > > As Sergey suggested... but i would also verify no libvirt issues or
> >> > storage
> >> > > pool issues - so perhaps:
> >> > >
> >> > > Stop mgmt,
> >> > > Stop all agents
> >> > > Restart libvirtd (and check libvirt logs afterwards)
> >> > > Start agents
> >> > > Start mgmt.
> >> > >
> >> > > What was originally issue with libvirtd ?
> >> > > That sounds fishy to me...
> >> > >
> >> > > Andrija
> >> > >
> >> > > On Wed, Mar 20, 2019, 02:15 Sergey Levitskiy 
> >> > wrote:
> >> > >
> >> > > > select * from networks where removed is null;
> >> > > > select * from vm_instance where id=87;
> >> > > > select id,name from vm_instance where name like 'r%' and removed
> is
> >> > null;
> >> > > >
> >> > > > Basically since the network offering is not redundant this error
> is
> >> > only
> >> > > > thrown when there is no router associated with your network.
> Usually
> >> > > > management server restart tries to implement network again. Please
> >> > > restart
> >> > > > management server, save and share management server log.
> >> > > >
> >> > > >
> >> > > >
> >> > > >
> >> > > > On 3/19/19, 3:31 PM, "Jevgeni Zolotarjov" <
> j.zolotar...@gmail.com>
> >> > > wrote:
> >> > > >
> >> > > > Check network_offering table for  value in column
> >> > > > redundant_router_service  for the network offering you use.
> >> > > > in table network_offering_table all records have
> >> > > > redundant_router_service =

Re: Disaster after maintenance

2019-03-20 Thread Jevgeni Zolotarjov
After a dozen attempts, the Virtual Router could finally be recreated. But
it is stuck in Starting status forever, and the console reports that it requires an
upgrade and that its Version is UNKNOWN.

It does not resolve the problem; I cannot move further from this point.
Any hints?

Or am I condemned to reinstall CloudStack from scratch?
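A hedged way to see what the API reports for that router, and to force a rebuild once the rest of the environment is healthy (the requiresupgrade field and the cleanup flag assume a 4.11-era API; the network id is a placeholder):

cmk list routers listall=true filter=name,state,version,requiresupgrade,hostname
# Destroy and recreate the VR for a given network (disruptive for that network):
cmk restart network id=<network-id> cleanup=true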

On Wed, Mar 20, 2019 at 11:08 AM Jevgeni Zolotarjov 
wrote:

> Under this defaultGuestNetwork, I go to Virtual Appliances. There is no
> VMS - "no data to show"
>
> I dont have any network, other than this single default one.
>
> I've tried adding new network - Add guest network. But I am not able to do
> so, cause in the wizard popup, it offers empty dropdown with Zones
> selection. And this wizard doesnt not allow to go further without selecting
> Zone
>
> On Wed, Mar 20, 2019 at 10:28 AM Andrija Panic 
> wrote:
>
>> you need to delete/remove all VMs inside this network (tick the "Expunge"
>> button during VM deletion - if you want to really delete the VMs) in order
>> to be able to delete the network - OR simply attach this VM to another
>> network, make this new network a DEFAULT one (NIC that is...), and then
>> detach from old network - and then effectively your VM was "removed" from
>> old network - after this you should be able to delete the old network. I
>> assume some DB incosistencies perhaps, being the reason you can not
>> restart
>> the network.
>>
>> Did you try restarting some other Network - or deploying a new network,
>> spin a VM in it, then again try to restart this new network - does it
>> work ?
>>
>> Andrija
>>
>> On Wed, 20 Mar 2019 at 08:58, Jevgeni Zolotarjov 
>> wrote:
>>
>> > >>>Stop mgmt,
>> > >>>Stop all agents
>> > >>>Restart libvirtd (and check libvirt logs afterwards)
>> > >>>Start agents
>> > >>>Start mgmt.
>> >
>> > I did that numerous time. Nothing really suspicious
>> > I can see that systems VMs are running - both in cloudstack console and
>> > with virsh list -all
>> >
>> > It is apparently problem with network.
>> > Is there a way to force recreation of defaultGuestNetwork? or force
>> > recreation of Virtual Router.
>> > I am unable to delete network, which is supposed to rebuild network with
>> > its router. Thats the issue
>> >
>> > The issue with libvirtd was, that eventually at some point it was
>> updated
>> > during 4 months of running, and not rebooted. It still worked. We had to
>> > add listen_tcp = 1 for libvirtd to start working again.
>> >
>> > On Wed, Mar 20, 2019 at 9:49 AM Andrija Panic 
>> > wrote:
>> >
>> > > As Sergey suggested... but i would also verify no libvirt issues or
>> > storage
>> > > pool issues - so perhaps:
>> > >
>> > > Stop mgmt,
>> > > Stop all agents
>> > > Restart libvirtd (and check libvirt logs afterwards)
>> > > Start agents
>> > > Start mgmt.
>> > >
>> > > What was originally issue with libvirtd ?
>> > > That sounds fishy to me...
>> > >
>> > > Andrija
>> > >
>> > > On Wed, Mar 20, 2019, 02:15 Sergey Levitskiy 
>> > wrote:
>> > >
>> > > > select * from networks where removed is null;
>> > > > select * from vm_instance where id=87;
>> > > > select id,name from vm_instance where name like 'r%' and removed is
>> > null;
>> > > >
>> > > > Basically since the network offering is not redundant this error is
>> > only
>> > > > thrown when there is no router associated with your network. Usually
>> > > > management server restart tries to implement network again. Please
>> > > restart
>> > > > management server, save and share management server log.
>> > > >
>> > > >
>> > > >
>> > > >
>> > > > On 3/19/19, 3:31 PM, "Jevgeni Zolotarjov" 
>> > > wrote:
>> > > >
>> > > > Check network_offering table for  value in column
>> > > > redundant_router_service  for the network offering you use.
>> > > > in table network_offering_table all records have
>> > > > redundant_router_service =
>> > > > 0
>> > > >
>> > > > Can you also run the following:
>> > > > >>>select name, state, removed  from host where name like 'r%'
>> > > > returns zero rows - nothing
>> > > >
>> > > > >>>select * from domain_router;
>> > > > # id, element_id, public_mac_address, public_ip_address,
>> > > > public_netmask,
>> > > > guest_netmask, guest_ip_address, is_redundant_router, priority,
>> > > > redundant_state, stop_pending, role, template_version,
>> > > scripts_version,
>> > > > vpc_id, update_state
>> > > > '4', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN',
>> '0',
>> > > > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28
>> > UTC
>> > > > 2018',
>> > > > '57db7bd8118977a5f2cd3ef1c7503633\n', NULL, NULL
>> > > > '49', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN',
>> '0',
>> > > > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28
>> > UTC
>> > > > 2018',
>> > > > 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
>> > > > '73', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN',
>> '0',
>> > > > 'VIRTUAL_ROUTER', 

[ANNOUNCEMENT] Apache CloudStack CloudMonkey v6.0.0

2019-03-20 Thread Rohit Yadav
The Apache Software Foundation Announces Apache® CloudMonkey® v6.0.

Popular Open Source Command Line Interface tool that simplifies Apache
CloudStack configuration and management now faster and easier to use.

Wakefield, MA, March 20, 2019 (GLOBE NEWSWIRE) -- The Apache Software
Foundation (ASF), the all-volunteer developers, stewards, and incubators of
more than 350 Open Source projects and initiatives, announced today Apache®
CloudStack® CloudMonkey v6.0, the latest version of the turnkey enterprise
Cloud orchestration platform's command line interface tool.

Apache CloudStack is the proven, highly scalable, and easy-to-deploy IaaS
platform used for rapidly creating private, public, and hybrid Cloud
environments. Thousands of large-scale public Cloud providers and
enterprise organizations use Apache CloudStack to enable billions of
dollars worth of business transactions annually across their clouds.

Apache CloudMonkey v6.0.0 is the latest major release since the previous
major 5.x release in September 2013. CloudMonkey v6.0.0 is a rewrite of the
original tool in Go programming language, and can be used both as an
interactive shell and as a command line tool that simplifies CloudStack
configuration and management.

Some of the new features and major changes include:

- Rewrite in Go, ships as single binary for Linux, Mac, and Windows
- Drop-in replacement for legacy Python-based cloudmonkey
- About 5-20x faster than legacy Python-based cloudmonkey
- Interactive UX for parameter and arg completion and selection
- JSON is the default output format
- New column based output
- Enable debug mode using set debug true option, file-based logging removed
- Per server profile based API cache
- New syntax arg=@/path/to/file to pass the content of file as API argument
value similar to curl
- Improve help docs using -h argument
- Removed: XML output, coloured output, several set options

"This release is the work of over one year of effort and driven by the
people operating CloudStack clouds," said Rohit Yadav, Apache CloudStack
CloudMonkey v6.0 author, and release manager. "I would like to thank the
contributors across all of these organizations for supporting this release,
which reflects both the user-driven nature of our community and the Apache
CloudStack project's commitment to continue to be the most stable, easily
deployable, scalable Open Source platform for IaaS. Along with ease of
installation, usage and availability of cross-platform dependency-free
builds including Windows builds, v6.0 brings many changes and optimizations
such as more interactive shell for parameter completion, faster API
requests processing, server profile specific API caching, improved API help
docs and a new syntax to pass content of files as API parameter argument."
More on the background and story behind the CloudMonkey 6.0 effort can be
found at
https://blogs.apache.org/cloudstack/entry/what-s-coming-in-cloudmonkey

"Apache CloudStack is a significant part of our Cloud portfolio right now –
we run large deployments all over the world, often supporting critical
customer applications," said Robert van der Meulen, Product Strategy Lead
at Leaseweb Global B.V. "CloudMonkey is an invaluable tool for interacting
with CloudStack-based clouds, and it's the go-to tool that we recommend to
our customers when they want to use command-line interaction with our
CloudStack platforms."

"CloudMonkey is an effective tool for the operators of CloudStack
environments and it becomes essential in large-scale CloudStack
deployments," said Giles Sirett, CEO of ShapeBlue. "It's great to see this
new version of CloudMonkey: having a CLI that can run on Windows desktops
as well as Linux and Mac is important as we see more enterprise adoption of
Apache CloudStack."

"CloudMonkey is now written in Golang, and with version v6.0 loading, speed
has been drastically improved (accessing the CLI in under 0.5s)," said
Pierre-Luc Dion, Cloud Architect at Cloud.ca. "This simplifies
installation, deployments, updates, and operational efficiency."

"After many years of managing production Apache CloudStack deployments, I
consider CloudMonkey a core tool in anyone's CloudStack toolkit, and now
also being available for Windows makes me really happy," said Andrija
Panic, Apache CloudStack Committer. "I can certainly see major speed
improvements, but also having backward compatibility is what is so great
with this new release."

Catch Apache CloudStack in action at ApacheCon 9-12 September 2019 in Las
Vegas, Nevada, and at numerous Meetups worldwide, held throughout the year.

# Downloads and Documentation
The official source code for CloudMonkey v6.0.0 can be downloaded from
http://cloudstack.apache.org/downloads.html. The community-maintained
builds are available at the project's Github release page at
https://github.com/apache/cloudstack-cloudmonkey/releases . CloudMonkey's
usage is documented at https://github.com/apache/cloudstack-cloudmonkey/wiki

# Availability 

How to reimport instances from failed cluster

2019-03-20 Thread Jevgeni Zolotarjov
I've come across a problem with the cluster which I cannot manage to resolve.
There is a long thread about the problem with the failed defaultGuestNetwork.

Now:
What I am going to do:
* download all VM volumes
* reinstall Cloudstack from scratch
* upload volumes to new VMs.

How can that be done correctly?
In other words - what is the correct way to migrate all the VMs? Is there an
official guide for that?
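A rough sketch of the re-import side, assuming the volumes were saved as QCOW2 files and published on a web server reachable by the SSVM (all IDs and URLs below are placeholders):

# ROOT disks: register each saved image as a template, then deploy a new VM from it
cmk register template name=vm01-root displaytext=vm01-root url=http://<webserver>/vm01-root.qcow2 zoneid=<zone-id> format=QCOW2 hypervisor=KVM ostypeid=<os-type-id>
cmk deploy virtualmachine name=vm01 templateid=<new-template-id> serviceofferingid=<offering-id> zoneid=<zone-id>

# DATA disks: upload the saved volume and attach it to the new VM
cmk upload volume name=vm01-data url=http://<webserver>/vm01-data.qcow2 zoneid=<zone-id> format=QCOW2
cmk attach volume id=<new-volume-id> virtualmachineid=<new-vm-id>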


Re: Uploading volume does not work

2019-03-20 Thread Fariborz Navidan
Yes, I have updated it. I can upload templates but not volumes. The
URL is on the server's public IP.
On 3/19/19, Dag Sonstebo  wrote:
> Fariborz,
>
> Have you updated "secstorage.allowed.internal.sites" to allow upload from
> the internal IP range your management server is on?
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
>
> On 18/03/2019, 17:24, "Fariborz Navidan"  wrote:
>
> Hello
>
> I have a web server on the management server and a qcow2 format volume.
> When request cloudstack to download the volume, it keeps it in
> "UploadNotStarted" state and does not start downloading it to the
> secondary
> storage. I've checked SSVM and secondary storage mount point, free disk
> space, max template size and allowed local ips. Below is the log since
> last
> cloudstack restart up to requesting volume upload
>
>
> 2019-03-18 18:21:13,559 DEBUG [c.c.a.m.AgentManagerImpl]
> (AgentManager-Handler-9:null) (logid:) SeqA 21-76888: Processing Seq
> 21-76888:  { Cmd , MgmtId: -1, via: 21, Ver: v1, Flags: 11,
>
> [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":97,"_loadInfo":"{\n
> \"connections\": []\n}","wait":0}}] }
> 2019-03-18 18:21:13,560 DEBUG [c.c.a.m.AgentManagerImpl]
> (AgentManager-Handler-9:null) (logid:) SeqA 21-76888: Sending Seq
> 21-76888:  { Ans: , MgmtId: 279278805450774, via: 21, Ver: v1, Flags:
> 100010,
> [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
> 2019-03-18 18:21:20,616 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-3179a01e) (logid:33ac3532) Begin cleanup
> expired async-jobs
> 2019-03-18 18:21:20,619 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-3179a01e) (logid:33ac3532) End cleanup
> expired
> async-jobs
> 2019-03-18 18:21:23,284 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
> (Timer-8:ctx-88fad0b1) (logid:ef76ea22) getCommandHostDelegation: class
> org.apache.cloudstack.storage.command.DownloadProgressCommand
> 2019-03-18 18:21:23,285 DEBUG [c.c.h.XenServerGuru]
> (Timer-8:ctx-88fad0b1)
> (logid:ef76ea22) We are returning the default host to execute commands
> because the command is not of Copy type.
> 2019-03-18 18:21:23,285 DEBUG [o.a.c.s.RemoteHostEndPoint]
> (Timer-8:ctx-88fad0b1) (logid:ef76ea22) Sending command
> org.apache.cloudstack.storage.command.DownloadProgressCommand to host:
> 22
> 2019-03-18 18:21:23,285 DEBUG [c.c.a.t.Request] (Timer-8:ctx-88fad0b1)
> (logid:ef76ea22) Seq 22-6353734649289638105: Sending  { Cmd , MgmtId:
> 279278805450774, via: 22(s-96-VM), Ver: v1, Flags: 100011,
>
> [{"org.apache.cloudstack.storage.command.DownloadProgressCommand":{"jobId":"46ad04c0-44d1-4f79-ac61-189c04f00564","request":"GET_STATUS","hvm":false,"maxDownloadSizeInBytes":536870912000,"id":125,"resourceType":"VOLUME","installPath":"volumes/2/125","_store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
> 192.168.0.1/home2/secondary","_role":"Image"}},"url":"
>
> http://178.33.230.41/vps/Younes.qcow2","format":"QCOW2","accountId":2,"name":"DATA-Younes","wait":0}}]
> }
> 2019-03-18 18:21:23,330 DEBUG [c.c.a.t.Request]
> (AgentManager-Handler-10:null) (logid:) Seq 22-6353734649289638105:
> Processing:  { Ans: , MgmtId: 279278805450774, via: 22, Ver: v1, Flags:
> 10,
>
> [{"com.cloud.agent.api.storage.DownloadAnswer":{"jobId":"46ad04c0-44d1-4f79-ac61-189c04f00564","downloadPct":0,"errorString":"
>
> ","downloadStatus":"NOT_DOWNLOADED","downloadPath":"/mnt/SecStorage/cf5552bd-cb13-38f0-b01e-1f9d16cd1924/volumes/2/125/dnld1738121055299197614tmp_","installPath":"volumes/2/125","templateSize":0,"templatePhySicalSize":0,"result":true,"details":"
> ","wait":0}}] }
> 2019-03-18 18:21:23,560 DEBUG [c.c.a.m.AgentManagerImpl]
> (AgentManager-Handler-12:null) (logid:) SeqA 21-76889: Processing Seq
> 21-76889:  { Cmd , MgmtId: -1, via: 21, Ver: v1, Flags: 11,
>
> [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":97,"_loadInfo":"{\n
> \"connections\": []\n}","wait":0}}] }
> 2019-03-18 18:21:23,561 DEBUG [c.c.a.m.AgentManagerImpl]
> (AgentManager-Handler-12:null) (logid:) SeqA 21-76889: Sending Seq
> 21-76889:  { Ans: , MgmtId: 279278805450774, via: 21, Ver: v1, Flags:
> 100010,
> [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
> 2019-03-18 18:21:30,616 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-e38890d7) (logid:6a68f0ff) Begin cleanup
> expired async-jobs
> 2019-03-18 18:21:30,619 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> (AsyncJobMgr-Heartbeat-1:ctx-e38890d7) (logid:6a68f0ff) End cleanup
> expired
> async-jobs
> 2019-03-18 18:21:33,332 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
> (Timer-8:ctx-9960a19f) (logid:d57bfd75) getCommandHostDelegation: class
> org.apache.cloudstack.storage.command.DownloadProgressCommand
> 

Re: Disaster after maintenance

2019-03-20 Thread Jevgeni Zolotarjov
Under this defaultGuestNetwork, I go to Virtual Appliances. There are no VMs
- "no data to show".

I don't have any network other than this single default one.

I've tried adding a new network - Add guest network. But I am not able to do
so, because the wizard popup offers an empty dropdown for the Zone selection,
and the wizard does not allow me to go further without selecting a Zone.
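A rough way to check whether the API (rather than the UI wizard) actually sees the zone, and to create the network from the CLI if it does (IDs are placeholders; a shared/basic-zone offering may need extra parameters such as gateway, netmask and vlan):

cmk list zones filter=id,name,allocationstate,networktype
cmk list networkofferings state=Enabled filter=id,name,guestiptype
cmk create network name=guest-net2 displaytext=guest-net2 zoneid=<zone-id> networkofferingid=<offering-id>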

On Wed, Mar 20, 2019 at 10:28 AM Andrija Panic 
wrote:

> you need to delete/remove all VMs inside this network (tick the "Expunge"
> button during VM deletion - if you want to really delete the VMs) in order
> to be able to delete the network - OR simply attach this VM to another
> network, make this new network a DEFAULT one (NIC that is...), and then
> detach from old network - and then effectively your VM was "removed" from
> old network - after this you should be able to delete the old network. I
> assume some DB incosistencies perhaps, being the reason you can not restart
> the network.
>
> Did you try restarting some other Network - or deploying a new network,
> spin a VM in it, then again try to restart this new network - does it work
> ?
>
> Andrija
>
> On Wed, 20 Mar 2019 at 08:58, Jevgeni Zolotarjov 
> wrote:
>
> > >>>Stop mgmt,
> > >>>Stop all agents
> > >>>Restart libvirtd (and check libvirt logs afterwards)
> > >>>Start agents
> > >>>Start mgmt.
> >
> > I did that numerous time. Nothing really suspicious
> > I can see that systems VMs are running - both in cloudstack console and
> > with virsh list -all
> >
> > It is apparently problem with network.
> > Is there a way to force recreation of defaultGuestNetwork? or force
> > recreation of Virtual Router.
> > I am unable to delete network, which is supposed to rebuild network with
> > its router. Thats the issue
> >
> > The issue with libvirtd was, that eventually at some point it was updated
> > during 4 months of running, and not rebooted. It still worked. We had to
> > add listen_tcp = 1 for libvirtd to start working again.
> >
> > On Wed, Mar 20, 2019 at 9:49 AM Andrija Panic 
> > wrote:
> >
> > > As Sergey suggested... but i would also verify no libvirt issues or
> > storage
> > > pool issues - so perhaps:
> > >
> > > Stop mgmt,
> > > Stop all agents
> > > Restart libvirtd (and check libvirt logs afterwards)
> > > Start agents
> > > Start mgmt.
> > >
> > > What was originally issue with libvirtd ?
> > > That sounds fishy to me...
> > >
> > > Andrija
> > >
> > > On Wed, Mar 20, 2019, 02:15 Sergey Levitskiy 
> > wrote:
> > >
> > > > select * from networks where removed is null;
> > > > select * from vm_instance where id=87;
> > > > select id,name from vm_instance where name like 'r%' and removed is
> > null;
> > > >
> > > > Basically since the network offering is not redundant this error is
> > only
> > > > thrown when there is no router associated with your network. Usually
> > > > management server restart tries to implement network again. Please
> > > restart
> > > > management server, save and share management server log.
> > > >
> > > >
> > > >
> > > >
> > > > On 3/19/19, 3:31 PM, "Jevgeni Zolotarjov" 
> > > wrote:
> > > >
> > > > Check network_offering table for  value in column
> > > > redundant_router_service  for the network offering you use.
> > > > in table network_offering_table all records have
> > > > redundant_router_service =
> > > > 0
> > > >
> > > > Can you also run the following:
> > > > >>>select name, state, removed  from host where name like 'r%'
> > > > returns zero rows - nothing
> > > >
> > > > >>>select * from domain_router;
> > > > # id, element_id, public_mac_address, public_ip_address,
> > > > public_netmask,
> > > > guest_netmask, guest_ip_address, is_redundant_router, priority,
> > > > redundant_state, stop_pending, role, template_version,
> > > scripts_version,
> > > > vpc_id, update_state
> > > > '4', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN',
> '0',
> > > > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28
> > UTC
> > > > 2018',
> > > > '57db7bd8118977a5f2cd3ef1c7503633\n', NULL, NULL
> > > > '49', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN',
> '0',
> > > > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28
> > UTC
> > > > 2018',
> > > > 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> > > > '73', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN',
> '0',
> > > > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28
> > UTC
> > > > 2018',
> > > > 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> > > > '74', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN',
> '0',
> > > > 'VIRTUAL_ROUTER', NULL, NULL, NULL, NULL
> > > > '75', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN',
> '0',
> > > > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28
> > UTC
> > > > 2018',
> > > > 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> > > > 

Re: Disaster after maintenance

2019-03-20 Thread Andrija Panic
you need to delete/remove all VMs inside this network (tick the "Expunge"
button during VM deletion - if you want to really delete the VMs) in order
to be able to delete the network - OR simply attach this VM to another
network, make this new network a DEFAULT one (NIC that is...), and then
detach from old network - and then effectively your VM was "removed" from
old network - after this you should be able to delete the old network. I
assume some DB inconsistencies are perhaps the reason you cannot restart
the network.
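
A minimal CloudMonkey sketch of that sequence - all ids below are placeholders,
cmk is assumed to be configured against the management server, and the command
names follow CloudMonkey's verb + resource split of the corresponding API calls
(destroyVirtualMachine, addNicToVirtualMachine, updateDefaultNicForVirtualMachine,
removeNicFromVirtualMachine, deleteNetwork):

    # either expunge a VM you no longer need...
    cmk destroy virtualmachine id=<vm-uuid> expunge=true
    # ...or move it: add a NIC in the new network, make it the default, drop the old NIC
    cmk add nictovirtualmachine virtualmachineid=<vm-uuid> networkid=<new-network-uuid>
    cmk update defaultnicforvirtualmachine virtualmachineid=<vm-uuid> nicid=<new-nic-uuid>
    cmk remove nicfromvirtualmachine virtualmachineid=<vm-uuid> nicid=<old-nic-uuid>
    # once the network has no VMs left in it
    cmk delete network id=<old-network-uuid>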

Did you try restarting some other network - or deploying a new network,
spinning up a VM in it, then trying to restart this new network - does it work?

Andrija

On Wed, 20 Mar 2019 at 08:58, Jevgeni Zolotarjov 
wrote:

> >>>Stop mgmt,
> >>>Stop all agents
> >>>Restart libvirtd (and check libvirt logs afterwards)
> >>>Start agents
> >>>Start mgmt.
>
> I did that numerous times. Nothing really suspicious.
> I can see that the system VMs are running - both in the CloudStack console
> and with virsh list --all
>
> It is apparently a problem with the network.
> Is there a way to force recreation of the defaultGuestNetwork, or force
> recreation of the Virtual Router?
> I am unable to delete the network, which is supposed to rebuild the network
> with its router. That's the issue.
>
> The issue with libvirtd was that at some point during 4 months of running it
> was updated but never restarted. It still worked. We had to add
> listen_tcp = 1 for libvirtd to start working again.
>
> On Wed, Mar 20, 2019 at 9:49 AM Andrija Panic 
> wrote:
>
> > As Sergey suggested... but I would also verify no libvirt issues or
> storage
> > pool issues - so perhaps:
> >
> > Stop mgmt,
> > Stop all agents
> > Restart libvirtd (and check libvirt logs afterwards)
> > Start agents
> > Start mgmt.
> >
> > What was originally the issue with libvirtd?
> > That sounds fishy to me...
> >
> > Andrija
> >
> > On Wed, Mar 20, 2019, 02:15 Sergey Levitskiy 
> wrote:
> >
> > > select * from networks where removed is null;
> > > select * from vm_instance where id=87;
> > > select id,name from vm_instance where name like 'r%' and removed is
> null;
> > >
> > > Basically since the network offering is not redundant this error is
> only
> > > thrown when there is no router associated with your network. Usually
> > > management server restart tries to implement network again. Please
> > restart
> > > management server, save and share management server log.
> > >
> > >
> > >
> > >
> > > On 3/19/19, 3:31 PM, "Jevgeni Zolotarjov" 
> > wrote:
> > >
> > > Check network_offering table for value in column
> > > redundant_router_service for the network offering you use.
> > > In the network_offering table all records have
> > > redundant_router_service = 0
> > >
> > > Can you also run the following:
> > > >>>select name, state, removed  from host where name like 'r%'
> > > returns zero rows - nothing
> > >
> > > >>>select * from domain_router;
> > > # id, element_id, public_mac_address, public_ip_address,
> > > public_netmask,
> > > guest_netmask, guest_ip_address, is_redundant_router, priority,
> > > redundant_state, stop_pending, role, template_version,
> > scripts_version,
> > > vpc_id, update_state
> > > '4', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28
> UTC
> > > 2018',
> > > '57db7bd8118977a5f2cd3ef1c7503633\n', NULL, NULL
> > > '49', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28
> UTC
> > > 2018',
> > > 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> > > '73', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28
> UTC
> > > 2018',
> > > 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> > > '74', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > > 'VIRTUAL_ROUTER', NULL, NULL, NULL, NULL
> > > '75', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28
> UTC
> > > 2018',
> > > 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> > > '76', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28
> UTC
> > > 2018',
> > > 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> > > '77', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.1 Fri Jun 22 07:52:17
> UTC
> > > 2018',
> > > 'c03a474302d89fa82d345e10fe4cb751\n', NULL, 'UPDATE_FAILED'
> > > '80', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.1 Fri Jun 22 07:52:17
> UTC
> > > 2018',
> > > 

Re: Disaster after maintenance

2019-03-20 Thread Jevgeni Zolotarjov
>>>Stop mgmt,
>>>Stop all agents
>>>Restart libvirtd (and check libvirt logs afterwards)
>>>Start agents
>>>Start mgmt.

I did that numerous times. Nothing really suspicious.
I can see that the system VMs are running - both in the CloudStack console
and with virsh list --all

It is apparently a problem with the network.
Is there a way to force recreation of the defaultGuestNetwork, or force
recreation of the Virtual Router?
I am unable to delete the network, which is supposed to rebuild the network
with its router. That's the issue.

The issue with libvirtd was that at some point during 4 months of running it
was updated but never restarted. It still worked. We had to add
listen_tcp = 1 for libvirtd to start working again.
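
For reference, a sketch of the listen-related libvirtd settings the CloudStack
KVM installation guide asks for - verify against the docs for your distro and
libvirt version, since newer libvirt releases use socket activation instead of
the --listen flag:

    # /etc/libvirt/libvirtd.conf
    listen_tls = 0
    listen_tcp = 1
    tcp_port = "16509"
    auth_tcp = "none"
    mdns_adv = 0

    # and, on systems that still pass daemon flags directly
    # (e.g. /etc/sysconfig/libvirtd on RHEL/CentOS):
    LIBVIRTD_ARGS="--listen"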

On Wed, Mar 20, 2019 at 9:49 AM Andrija Panic 
wrote:

> As Sergey suggested... but I would also verify no libvirt issues or storage
> pool issues - so perhaps:
>
> Stop mgmt,
> Stop all agents
> Restart libvirtd (and check libvirt logs afterwards)
> Start agents
> Start mgmt.
>
> What was originally the issue with libvirtd?
> That sounds fishy to me...
>
> Andrija
>
> On Wed, Mar 20, 2019, 02:15 Sergey Levitskiy  wrote:
>
> > select * from networks where removed is null;
> > select * from vm_instance where id=87;
> > select id,name from vm_instance where name like 'r%' and removed is null;
> >
> > Basically since the network offering is not redundant this error is only
> > thrown when there is no router associated with your network. Usually
> > management server restart tries to implement network again. Please
> restart
> > management server, save and share management server log.
> >
> >
> >
> >
> > On 3/19/19, 3:31 PM, "Jevgeni Zolotarjov" 
> wrote:
> >
> > Check network_offering table for value in column
> > redundant_router_service for the network offering you use.
> > In the network_offering table all records have
> > redundant_router_service = 0
> >
> > Can you also run the following:
> > >>>select name, state, removed  from host where name like 'r%'
> > returns zero rows - nothing
> >
> > >>>select * from domain_router;
> > # id, element_id, public_mac_address, public_ip_address,
> > public_netmask,
> > guest_netmask, guest_ip_address, is_redundant_router, priority,
> > redundant_state, stop_pending, role, template_version,
> scripts_version,
> > vpc_id, update_state
> > '4', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28 UTC
> > 2018',
> > '57db7bd8118977a5f2cd3ef1c7503633\n', NULL, NULL
> > '49', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28 UTC
> > 2018',
> > 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> > '73', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28 UTC
> > 2018',
> > 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> > '74', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > 'VIRTUAL_ROUTER', NULL, NULL, NULL, NULL
> > '75', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28 UTC
> > 2018',
> > 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> > '76', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28 UTC
> > 2018',
> > 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> > '77', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.1 Fri Jun 22 07:52:17 UTC
> > 2018',
> > 'c03a474302d89fa82d345e10fe4cb751\n', NULL, 'UPDATE_FAILED'
> > '80', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.1 Fri Jun 22 07:52:17 UTC
> > 2018',
> > 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> > '85', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.1 Fri Jun 22 07:52:17 UTC
> > 2018',
> > 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> > '86', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > 'VIRTUAL_ROUTER', NULL, NULL, NULL, NULL
> > '87', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> > 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.2.0 Mon Nov 12 15:06:49
> UTC
> > 2018', '873057731ff2cba4a1f3b2411765407c\n', NULL, NULL
> >
> >
> > >>>select * from router_network_ref;
> > # id, router_id, network_id, guest_type
> > '1', '4', '204', 'Shared'
> > '2', '49', '204', 'Shared'
> > '3', '73', '204', 'Shared'
> > '4', '75', '204', 'Shared'
> > '5', '76', '204', 'Shared'
> > '6', '77', '204', 'Shared'
> > '7', '80', '204', 'Shared'
> > '8', '85', '204', 'Shared'

Re: Disaster after maintenance

2019-03-20 Thread Andrija Panic
As Sergey suggested... but I would also verify no libvirt issues or storage
pool issues - so perhaps:

Stop mgmt,
Stop all agents
Restart libvirtd (and check libvirt logs afterwards)
Start agents
Start mgmt.
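
As a shell sketch of that sequence - assuming the packaged systemd unit names
cloudstack-management and cloudstack-agent, with the agent steps run on every
KVM host:

    # on the management server
    systemctl stop cloudstack-management

    # on each KVM host
    systemctl stop cloudstack-agent
    systemctl restart libvirtd
    journalctl -u libvirtd -n 100 --no-pager   # check for errors before continuing
    systemctl start cloudstack-agent

    # back on the management server
    systemctl start cloudstack-management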

What was originally the issue with libvirtd?
That sounds fishy to me...

Andrija

On Wed, Mar 20, 2019, 02:15 Sergey Levitskiy  wrote:

> select * from networks where removed is null;
> select * from vm_instance where id=87;
> select id,name from vm_instance where name like 'r%' and removed is null;
>
> Basically, since the network offering is not redundant, this error is only
> thrown when there is no router associated with your network. Usually a
> management server restart tries to implement the network again. Please restart
> the management server, then save and share the management server log.
>
>
>
>
> On 3/19/19, 3:31 PM, "Jevgeni Zolotarjov"  wrote:
>
> Check network_offering table for value in column
> redundant_router_service for the network offering you use.
> In the network_offering table all records have
> redundant_router_service = 0
>
> Can you also run the following:
> >>>select name, state, removed  from host where name like 'r%'
> returns zero rows - nothing
>
> >>>select * from domain_router;
> # id, element_id, public_mac_address, public_ip_address,
> public_netmask,
> guest_netmask, guest_ip_address, is_redundant_router, priority,
> redundant_state, stop_pending, role, template_version, scripts_version,
> vpc_id, update_state
> '4', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28 UTC
> 2018',
> '57db7bd8118977a5f2cd3ef1c7503633\n', NULL, NULL
> '49', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28 UTC
> 2018',
> 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> '73', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28 UTC
> 2018',
> 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> '74', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> 'VIRTUAL_ROUTER', NULL, NULL, NULL, NULL
> '75', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28 UTC
> 2018',
> 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> '76', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.0 Sun Jan 14 15:37:28 UTC
> 2018',
> 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> '77', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.1 Fri Jun 22 07:52:17 UTC
> 2018',
> 'c03a474302d89fa82d345e10fe4cb751\n', NULL, 'UPDATE_FAILED'
> '80', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.1 Fri Jun 22 07:52:17 UTC
> 2018',
> 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> '85', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.1 Fri Jun 22 07:52:17 UTC
> 2018',
> 'c03a474302d89fa82d345e10fe4cb751\n', NULL, NULL
> '86', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> 'VIRTUAL_ROUTER', NULL, NULL, NULL, NULL
> '87', '1', NULL, NULL, NULL, NULL, NULL, '0', NULL, 'UNKNOWN', '0',
> 'VIRTUAL_ROUTER', 'Cloudstack Release 4.11.2.0 Mon Nov 12 15:06:49 UTC
> 2018', '873057731ff2cba4a1f3b2411765407c\n', NULL, NULL
>
>
> >>>select * from router_network_ref;
> # id, router_id, network_id, guest_type
> '1', '4', '204', 'Shared'
> '2', '49', '204', 'Shared'
> '3', '73', '204', 'Shared'
> '4', '75', '204', 'Shared'
> '5', '76', '204', 'Shared'
> '6', '77', '204', 'Shared'
> '7', '80', '204', 'Shared'
> '8', '85', '204', 'Shared'
> '9', '86', '204', 'Shared'
> '10', '87', '204', 'Shared'
>
>
> On Wed, Mar 20, 2019 at 12:18 AM Sergey Levitskiy  >
> wrote:
>
> > Check network_offering table for  value in column
> > redundant_router_service  for the network offering you use.
> > Can you also run the following:
> > select name, state, removed  from host where name like 'r%'
> > select * from domain_router;
> > select * from router_network_ref;
> >
> > CloudStack is supposed to recreate your VR. If it is not happening, there
> > is something fundamentally wrong. I would advise destroying your VR again.
> > Stop your management server. Rotate the management server log and start it
> > again. If your VR doesn't start in a few minutes, post your complete
> > management server log and agent log again.
> >
> >
> >
> >
> > On 3/19/19, 2:56 PM, "Jevgeni Zolotarjov" 
> wrote:
> >
> > >>>Network 
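
Given the domain_router and router_network_ref output above, one way to see
whether any of those routers still exists as a live VM is to join against
vm_instance - a sketch against the cloud database, assuming network id 204 as
in the output above and the column names used elsewhere in this thread:

    SELECT vi.id, vi.name, vi.state, vi.removed
    FROM vm_instance vi
    JOIN router_network_ref rnr ON rnr.router_id = vi.id
    WHERE rnr.network_id = 204;
    -- a healthy network should show one router with state 'Running' and removed = NULL

While the management server retries the network implementation, the attempt can
be followed in the management log (with a packaged install typically
/var/log/cloudstack/management/management-server.log).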

Re: [RESULT][VOTE] Apache CloudStack 4.12.0.0

2019-03-20 Thread Andrija Panic
Good work Gabriel !

On Tue, Mar 19, 2019, 22:37 Gabriel Beims Bräscher 
wrote:

> Hi all,
>
> After 3 business days, the vote for CloudStack 4.12.0.0 *passes* with 4 PMC
> + 2 non-PMC votes.
>
> +1 (PMC / binding)
> * Wido den Hollander
> * Simon Weller
> * Rafael Weingärtner
> * Rohit Yadav
>
> +1 (nonbinding)
> * Gabriel Bräscher
> * Nicolas Vazquez
>
> 0
> none
>
> -1
> none
>
> Thanks to everyone participating.
>
> I will now prepare the release announcement to go out after 24 hours to
> give the mirrors time to catch up.
>
> Best regards,
> Gabriel
>


Re: cloudshell for individual users

2019-03-20 Thread Andrija Panic
Hi Richard,

There is no built-in way of doing so, but you could spin up a new VM template
and be creative, possibly with user-data etc. It boils down, as you know, to
having CloudMonkey (we have just released v6.0, written in Go!) available
to the user...

Andrija
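
A rough user-data sketch for such a template - the release asset URL below is
an assumption, so check the apache/cloudstack-cloudmonkey releases page for the
current binary name:

    #cloud-config
    runcmd:
      # download a static cmk binary (URL/asset name are illustrative)
      - curl -fsSL -o /usr/local/bin/cmk https://github.com/apache/cloudstack-cloudmonkey/releases/download/6.0.0/cmk.linux.x86-64
      - chmod +x /usr/local/bin/cmk
      # each user then configures their own profile with their own API keys, e.g.:
      # cmk set url http://<management-server>:8080/client/api
      # cmk set apikey <key> && cmk set secretkey <secret>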

On Wed, Mar 20, 2019, 03:23 Richard Persaud 
wrote:

> Hello,
>
> How can I enable a "cloudshell" instance for each account/user so they are
> able to manage their VPCs via the CLI? It would be great to offer a
> CloudMonkey cloudshell instance - similar to how Azure and GCP offer CLI
> options.
>
> Regards,
>
> Richard Persaud
>
>