Re: Testing CS 4.13.1

2020-05-19 Thread Luis Martinez

Hi, did you get the log file I attached? Any advice?

Thank you

On 5/18/2020 4:59 PM, Sergey Levitskiy wrote:

Please share the full management-server.log, since your snippet doesn't have the
relevant lines.

Thanks,
Sergey


On 5/18/20, 1:25 PM, "Luis Martinez"  wrote:

 This is the error I see in the logs. I am trying to SSH to the secondary
 storage VM but I am not able to.


 2020-05-18 16:23:26,778 DEBUG [c.c.c.ConsoleProxyManagerImpl]
 (consoleproxy-1:ctx-195a645d) (logid:e873b8f5) Zone 1 is ready to launch
 console proxy
 2020-05-18 16:23:29,820 DEBUG [c.c.s.StatsCollector]
 (StatsCollector-2:ctx-8b9729d8) (logid:d7872342) AutoScaling Monitor is
 running...
 2020-05-18 16:23:30,138 DEBUG [c.c.s.StatsCollector]
 (StatsCollector-5:ctx-a1b22863) (logid:bd79141a) HostStatsCollector is
 running...
 2020-05-18 16:23:30,195 DEBUG [c.c.a.t.Request]
 (StatsCollector-5:ctx-a1b22863) (logid:bd79141a) Seq
 1-1544453197211369507: Received:  { Ans: , MgmtId: 266682308174733, via:
 1(cloudstackvm1), Ver: v1, Flags: 10, { GetHostStatsAnswer } }
 2020-05-18 16:23:30,522 DEBUG [c.c.s.StatsCollector]
 (StatsCollector-3:ctx-ef56a510) (logid:1a0c1267) StorageCollector is
 running...
 2020-05-18 16:23:30,530 DEBUG [c.c.s.StatsCollector]
 (StatsCollector-3:ctx-ef56a510) (logid:1a0c1267) *There is no secondary
 storage VM for secondary storage host nfs://10.0.8.10/mnt/CS01/secondary*
 2020-05-18 16:23:30,533 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
 (StatsCollector-3:ctx-ef56a510) (logid:1a0c1267)
 getCommandHostDelegation: class com.cloud.agent.api.GetStorageStatsCommand
 2020-05-18 16:23:30,533 DEBUG [c.c.h.XenServerGuru]
 (StatsCollector-3:ctx-ef56a510) (logid:1a0c1267) We are returning the
 default host to execute commands because the command is not of Copy type.
 2020-05-18 16:23:30,612 DEBUG [c.c.a.t.Request]
 (StatsCollector-3:ctx-ef56a510) (logid:1a0c1267) Seq
 1-1544453197211369508: Received:  { Ans: , MgmtId: 266682308174733, via:
 1(cloudstackvm1), Ver: v1, Flags: 10, { GetStorageStatsAnswer } }
 2020-05-18 16:23:33,611 DEBUG [c.c.a.m.AgentManagerImpl]
 (AgentManager-Handler-3:null) (logid:) Ping from Routing host
 1(cloudstackvm1)
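For anyone hitting the same wall: system VMs are typically reached from the KVM host itself, over the link-local address on port 3922 with the management SSH key, not over port 22 from elsewhere. A sketch with the usual default paths and a placeholder address (look up the real link-local address under Infrastructure > System VMs):

```shell
# Sketch only: key path and port are CloudStack defaults on KVM;
# the link-local address below is a placeholder, not from this thread.
SSVM_LINKLOCAL="169.254.10.20"
KEY="/root/.ssh/id_rsa.cloud"   # present on the KVM host running the SSVM
printf 'ssh -i %s -p 3922 root@%s\n' "$KEY" "$SSVM_LINKLOCAL"
# Once logged in, /usr/local/cloud/systemvm/ssvm-check.sh reports common faults
# (DNS, NFS mount, management server reachability).
```

The printed command is meant to be run from the KVM host that hosts the SSVM.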

 On 5/18/2020 2:43 PM, Rohit Yadav wrote:
 > Hi Luis,
 >
 > Please use the correct systemvmtemplate version/link for CloudStack 
4.13.1.0.
 >
 > Refer to the install/upgrade docs such as 
http://docs.cloudstack.apache.org/en/4.13.1.0/upgrading/upgrade/upgrade-4.12.html
 >
 > Regards.
 >
 > Regards,
 > Rohit Yadav
 >
 > 
 > From: Luis Martinez 
 > Sent: Monday, May 18, 2020 11:07:20 PM
 > To: users@cloudstack.apache.org 
 > Subject: Testing CS 4.13.1
 >
 > Hi Group
 >
 > I need help. I am installing 4.13.1 for testing; the installation is fine,
 > but secondary storage is not working. I tried to SSH to the VM and I am
 > unable to. I used the following lines in different installations to see if
 > this would fix the problem, but it did not. How can I troubleshoot this? Or
 > am I using the wrong version?
 >
 >
 > 
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt
 > -m /mnt/secondary -u
 > 
http://download.cloudstack.org/systemvm/4.11/systemvmtemplate-4.11.3-kvm.qcow2.bz2
 > -h kvm -F
 > 
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt
 > -m /mnt/secondary -u
 > 
http://download.cloudstack.org/systemvm/4.11/systemvmtemplate-4.11.1-kvm.qcow2.bz2
 > -h kvm -F
 > 
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt
 > -m /mnt/secondary -u
 > 
http://download.cloudstack.org/systemvm/4.11/systemvmtemplate-4.11.0-kvm.qcow2.bz2
 > -h kvm -F
 >
 > Thank you in Advance.
 >
 >
 > rohit.ya...@shapeblue.com
 > www.shapeblue.com
 > 3 London Bridge Street, 3rd floor, News Building, London SE1 9SG, UK
 > @shapeblue
 >
 >
 >
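Per Rohit's advice above, the systemvm template must match the CloudStack version. A sketch of the 4.13 equivalent of the commands quoted above — the URL is inferred from the 4.11 naming pattern, so verify it against the 4.13.1.0 upgrade notes before running anything:

```shell
# Assumed URL pattern (not confirmed in this thread): check the official
# 4.13.1.0 upgrade docs for the authoritative template link.
VER="4.13"
TMPL="systemvmtemplate-4.13.1-kvm.qcow2.bz2"
URL="http://download.cloudstack.org/systemvm/${VER}/${TMPL}"
# Print the install command rather than executing it, so it can be reviewed:
echo "/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u ${URL} -h kvm -F"
```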



Management Server and hosts in different networks

2020-05-19 Thread info

Hello,

I want to start with CloudStack, with two host servers (from OVH), and of
course one management server (which can be at OVH or at another dedicated-server
or VPS provider).

Must all 3 servers be linked with the feature that OVH names "vlink"? (The RISE
server, the cheapest, does not have this feature; the Advanced and Infrastructure
servers do.)

Can I, for example, have the management server in one datacenter (in a Hetzner
VPS, for example), one host in OVH USA, and another host in OVH France?

Thank you in advance for your help!

Best wishes,

Augusto



--
This email has been scanned for viruses by Avast antivirus software.
https://www.avast.com/antivirus



Re: Management Server and hosts in different networks

2020-05-19 Thread info

Thank you Sina,

But I made a mistake: I meant to write CloudStack and wrote OpenStack
by error.


Thanks,

Augusto


On 20/05/2020 0:17, Sina Kashipazha wrote:

Dear Augusto,

This is the CloudStack mailing list, not OpenStack. CloudStack and OpenStack
do the same kind of thing, but in completely different ways; you can see them as
Android and iOS. It is better to ask your question on the OpenStack mailing
list.

Kind Regards,
Sina


On 19 May 2020, at 22:06, i...@defendhosting.com wrote:


Hello,

I want to start with openstack with 2 hosts/nodes, from OVH and of
course 1 management server/master.

Must all 3 servers be linked with the product that OVH names "vlink"?

Can I for example have the master in one datacenter (in a hetzner VPS
for example), one host in OVH USA, and another host in OVH France?

Thank you in advance for your help!

Best wishes,

Augusto




--

█ DEFEND HOSTING - www.defendhosting.com - A Company You Can Trust
█ Shared Hosting | VPS | Dedicated Servers | SEO | PBX, CRM and SEO Servers
█ USA/EU Locations | Fast Network/Server Backbone Connection
█ Best Value For Your Money | Outstanding Uptime | Customer Tailored 
Support | Money-Back Guarantee





RE: VirtIO Network Adapter for system vms on KVM Hypervisor

2020-05-19 Thread Sean Lair
Just for feedback, we are on 4.11.3 and run KVM on CentOS 7. Our 4.11.3 template
is set to Debian GNU/Linux 8 (64-bit). Our lspci output is shown below:

root@r-281-VM:~# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] 
(rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
00:04.0 Communication controller: Red Hat, Inc Virtio console
00:05.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:06.0 System peripheral: Intel Corporation 6300ESB Watchdog Timer
00:07.0 Ethernet controller: Red Hat, Inc Virtio network device
00:08.0 Ethernet controller: Red Hat, Inc Virtio network device
00:09.0 Ethernet controller: Red Hat, Inc Virtio network device


-Original Message-
From: Andrija Panic  
Sent: Saturday, May 16, 2020 7:12 AM
To: users 
Subject: Re: VirtIO Network Adapter for system vms on KVM Hypervisor

Thanks for the feedback on that one, Rafal.

Regards

On Sat, 16 May 2020 at 12:58, Rafal Turkiewicz  wrote:

> Just for the record
>
> I have tested this with Debian GNU/Linux 7.0 (64-bit) OS Type and it 
> also worked. It basically breaks as soon as I pick Debian GNU/Linux 8 
> (64-bit).
>
> Thanks
>
> On 2020/05/15 14:00:53, Rafal Turkiewicz  wrote:
> > Andrija,
> >
> > You are the man! I have changed the OS Type to the default Debian 5 
> > x64
> and boom! All sorted.
> >
> > It's really odd that picking an older OS Type solved the issue when in
> fact the systemVM is running Debian 9. Is this a bug of some sort?
> >
> > I might try and experiment with other OS Type Debian version X to 
> > see
> where it falls but for now I'm all happy!
> >
> > Once again thank you very much for the pointer!
> >
> > Raf
> >
> > On 2020/05/15 13:51:01, Andrija Panic  wrote:
> > > In the upgrade guide, we always advise (when registering the new
> systemVM
> > > template) to go as:
> > >
> > >   OS Type: Debian GNU/Linux 7.0 (64-bit) (or the highest 
> > > Debian
> release
> > > number available in the dropdown)
> > >
> > > That being said, in the clean 4.13 installation, the OS type is 
> > > set to Debian 5 x64 - so try each version and in between destroy VR (i.e.
> restart
> > > the network with cleanup) and observe "lspci" if virtio or intel 
> > > NICs
> - but
> > > also make sure that each time the VR is created on KVM host (i.e. 
> > > not
> on
> > > XEN).
> > >
> > > In order to change OS type for systemVM template, you will have to 
> > > use
> DB
> > > - modify the "vm_template" table - update the "guest_os_id" field
> value for
> > > that specific template, to the ID from the "guest_os" table where 
> > > name=Debian XXX 64.
> > >
> > > Hope that solves the issue - should by all means.
> > >
> > > Regards
> > > Andrija
> > >
> > >
> > > On Fri, 15 May 2020 at 15:33, Rafal Turkiewicz 
> wrote:
> > >
> > > > Hello Andrija,
> > > >
> > > > Thanks for your input the OS Type for the systemVM template is 
> > > > set to "Debian GNU/Linux 8 (64-bit)"
> > > >
> > > > I think I forgot to mention a very important aspect of my setup. 
> > > > This Cloudstack instance is powering XenServer and KVM where KVM 
> > > > was added recently.
> > > >
> > > > Your message made me think and look at my other (test lab) setup
> where
> > > > CloudStack is only powering KVM hypervisors. I can confirm all 
> > > > VRs
> are
> > > > running with virtio, which implies there's got to be something specific
> > > > to my mixed-HV CloudStack setup.
> > > >
> > > > I will keep looking into this but if you have any further 
> > > > thoughts
> on this
> > > > please let me know.
> > > >
> > > > Raf
> > > >
> > > > On 2020/05/15 11:14:37, Andrija Panic 
> wrote:
> > > > > Rafal,
> > > > >
> > > > > what is the OS type you defined for the systemVM template?
> > > > >
> > > > > In my env, VR (VPC) - all interfaces are VirtIO.
> > > > >
> > > > > Best
> > > > > Andrija
> > > > >
> > > > > On Fri, 15 May 2020 at 12:14, Rafal Turkiewicz 
> > > > > 
> > > > wrote:
> > > > >
> > > > > > Platform:
> > > > > > CloudStack 4.11.2 on CentOS 7 KVM Hypervisor on CentOS 7
> > > > > >
> > > > > > I have found some throughput issues on our VirtualRouters and I've
> > > > > > tracked it down to CPU IRQ hitting 99% on the VR, which was related
> > > > > > to NIC interrupts.
> > > > > >
> > > > > > I decided to look up what NIC is being emulated on the VRs; lspci
> > > > > > listed three Intel NICs:
> > > > > >
> > > > > > 00:03.0 Ethernet controller: Intel Corporation 82540EM 
> > > > > > Gigabit
> Ethernet
> > > > > > Controller (rev 03)
> > > > > > 00:04.0 Ethernet controller: Intel Corporation 82540EM 
> > > > > > 

RE: Virtual machines volume lock manager

2020-05-19 Thread Sean Lair
Are you using NFS?

Yes, we implemented locking because of that problem:

https://libvirt.org/locking-lockd.html

echo lock_manager = \"lockd\" >> /etc/libvirt/qemu.conf
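For completeness, a sketch of the host-side setup implied by the one-liner above, following the libvirt lockd documentation linked earlier (service names and distro specifics may vary):

```shell
# Enable libvirt's lockd lock manager on each KVM host (run as root).
echo 'lock_manager = "lockd"' >> /etc/libvirt/qemu.conf
systemctl enable --now virtlockd   # the lock daemon itself
systemctl restart libvirtd         # pick up the qemu.conf change
```

With lockd enabled, a second QEMU process cannot open the same disk image with write access, which prevents the split-brain "same VM running twice" corruption described below.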

-Original Message-
From: Andrija Panic  
Sent: Wednesday, October 30, 2019 6:55 AM
To: dev 
Cc: users 
Subject: Re: Virtual machines volume lock manager

I would advise trying to reproduce.

start migration, then either:
- configure the timeout so that it's way too low, so that the migration fails
due to timeouts, or
- restart the mgmt server in the middle of the migration.

Either should cause the migration to fail, and you can observe whether you have
reproduced the problem. Keep in mind that there might be some garbage left over,
due to the failed migration not being handled properly. But from the QEMU point
of view, if migration fails, the new VM should by all means be destroyed...
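The first reproduction step (lowering the migration timeout) can be sketched with CloudMonkey. The setting name `migratewait` and the values here are my assumption — confirm the exact global setting in your version before changing anything:

```shell
# Hypothetical values; 'migratewait' is assumed to be the relevant
# global setting controlling how long a migration may take.
cloudmonkey update configuration name=migratewait value=10
# Restart the management server so the new timeout takes effect,
# then kick off a VM migration and watch it time out:
systemctl restart cloudstack-management
```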



On Wed, 30 Oct 2019 at 11:31, Rakesh Venkatesh 

wrote:

> Hi Andrija
>
>
> Sorry for the late reply.
>
> I'm using ACS version 4.7, QEMU version 1:2.5+dfsg-5ubuntu10.40.
>
> I'm not sure whether the ACS job or the libvirt job failed, as I didn't look
> into the logs. Yes, the VM will be in paused state during migration, but
> after the failed migration the same VM was in "running" state on two
> different hypervisors. We wrote a script to find out which VMs were running
> duplicated and found that more than 5 VMs had this issue.
>
>
> On Mon, Oct 28, 2019 at 2:42 PM Andrija Panic 
> 
> wrote:
>
> > I've been running KVM public cloud up to recently and have never 
> > seen
> such
> > behaviour.
> >
> > What versions (ACS, qemu, libvrit) are you running?
> >
> > How does the migration fail - ACS job - or libvirt job?
> > destination VM is by default always in PAUSED state, until the 
> > migration
> is
> > finished - only then the destination VM (on the new host) will get
> RUNNING,
> > while previously pausing the original VM (on the old host).
> >
> > i,e.
> > phase1  source vm RUNNING, destination vm PAUSED (RAM content being
> > copied over... takes time...)
> > phase2  source vm PAUSED, destination vm PAUSED (last bits of RAM
> > content are migrated)
> > phase3  source vm destroyed, destination VM RUNNING.
> >
> > Andrija
> >
> > On Mon, 28 Oct 2019 at 14:26, Rakesh Venkatesh wrote:
> >
> > > Hello Users
> > >
> > >
> > > Recently we have seen cases where when the Vm migration fails,
> cloudstack
> > > ends up running two instances of the same VM on different hypervisors.
> > The
> > > state will be "running" and not any other transition state. This 
> > > will
> of
> > > course lead to disk corruption. Does CloudStack have any option of
> > > volume locking so that two instances of the same VM won't be running?
> > > Anyone else has faced this issue and found some solution to fix it?
> > >
> > > We are thinking of using "virtlockd" of libvirt or implementing 
> > > custom
> > lock
> > > mechanisms. There are some pros and cons of the both the solutions 
> > > and
> i
> > > want your feedback before proceeding further.
> > >
> > > --
> > > Thanks and regards
> > > Rakesh venkatesh
> > >
> >
> >
> > --
> >
> > Andrija Panić
> >
>
>
> --
> Thanks and regards
> Rakesh venkatesh
>


-- 

Andrija Panić


Re: Management Server and hosts in different networks

2020-05-19 Thread Sina Kashipazha
Dear Augusto,

This is the CloudStack mailing list, not OpenStack. CloudStack and OpenStack
do the same kind of thing, but in completely different ways; you can see them as
Android and iOS. It is better to ask your question on the OpenStack mailing
list.

Kind Regards,
Sina

> On 19 May 2020, at 22:06, i...@defendhosting.com wrote:
> 
> 
> Hello,
> 
> I want to start with openstack with 2 hosts/nodes, from OVH and of
> course 1 management server/master.
> 
> Must all 3 servers be linked with the product that OVH names "vlink"?
> 
> Can I for example have the master in one datacenter (in a hetzner VPS
> for example), one host in OVH USA, and another host in OVH France?
> 
> Thank you in advance for your help!
> 
> Best wishes,
> 
> Augusto
> 
> 
> 



Re: Management Server and hosts in different networks

2020-05-19 Thread info



Sorry, Cloudstack :-)


On 19/05/2020 22:44, Andrija Panic wrote:

If you want to start with OpenStack, then you are on the wrong mailing list
:)

On Tue, 19 May 2020, 22:06 ,  wrote:


Hello,

I want to start with openstack with 2 hosts/nodes, from OVH and of
course 1 management server/master.

Must all 3 servers be linked with the product that OVH names "vlink"?

Can I for example have the master in one datacenter (in a hetzner VPS
for example), one host in OVH USA, and another host in OVH France?

Thank you in advance for your help!

Best wishes,

Augusto










Re: Management Server and hosts in different networks

2020-05-19 Thread Andrija Panic
If you want to start with OpenStack, then you are on the wrong mailing list
:)

On Tue, 19 May 2020, 22:06 ,  wrote:

>
> Hello,
>
> I want to start with openstack with 2 hosts/nodes, from OVH and of
> course 1 management server/master.
>
> Must all 3 servers be linked with the product that OVH names "vlink"?
>
> Can I for example have the master in one datacenter (in a hetzner VPS
> for example), one host in OVH USA, and another host in OVH France?
>
> Thank you in advance for your help!
>
> Best wishes,
>
> Augusto
>
>
>
>


Management Server and hosts in different networks

2020-05-19 Thread info



Hello,

I want to start with openstack with 2 hosts/nodes, from OVH and of
course 1 management server/master.

Must all 3 servers be linked with the product that OVH names "vlink"?

Can I for example have the master in one datacenter (in a hetzner VPS
for example), one host in OVH USA, and another host in OVH France?

Thank you in advance for your help!

Best wishes,

Augusto





Re: CloudStack - Ubuntu/KVM (all in one management-server/host) - OS Upgrade w/o updating System VMs

2020-05-19 Thread Andrija Panic
Hi David,

0. Good :) (needless to say, always drop the messed-up DBs, and then import
the backup in, don't just try to import over the existing/messed-up DBs)
1. You are right, you HAVE TO use the **exact** name as it's stated in the
upgrade notes (ACS code base is searching for a template with that name,
otherwise DB upgrade will fail again)
2. Since it's not mentioned, just leave it as it is - it's not relevant
(e.g. for VMware/KVM it's ticked by default, if I'm not mistaken)

Cheers
Andrija

On Tue, 19 May 2020 at 20:58, David Merrill 
wrote:

> Reporting in, I was able to roll-back the cloudstack packages and reload
> the backup of the cloud database and get the UI going again.
>
> So that's good, but a couple questions on this page (and registering the
> template in the UI):
>
>  -
> http://docs.cloudstack.apache.org/en/latest/upgrading/upgrade/upgrade-4.11.html#update-system-vm-templates
>
> 1. I understand it's important to use the values specified, I assume
> setting the "name" specifically as stated is what helps cloudstack "find"
> the new templates when upgrading the database?
>
> 2. In the UI there's an HVM checkbox (that's defaulted to checked), but
> the documentation (above) doesn't specifically refer to it. So for my
> clarity (hopefully I just need to be reminded), leave it checked or
> unchecked? Does it matter in the context of the system VMs?
>
> Thanks!
> David
>
> David Merrill
> Senior Systems Engineer,
> Managed and Private/Hybrid Cloud Services
> OTELCO
> 92 Oak Street, Portland ME 04101
> office 207.772.5678 
> http://www.otelco.com/cloud-and-managed-services
>
> Confidentiality Message
> The information contained in this e-mail transmission may be confidential
> and legally privileged. If you are not the intended recipient, you are
> notified that any dissemination, distribution, copying or other use of this
> information, including attachments, is prohibited. If you received this
> message in error, please call me at 207.772.5678  so
> this error can be corrected.
>
>
> On 5/18/20, 10:22 AM, "David Merrill"  wrote:
>
> OK, got it, digging in...
>
> Thanks everyone,
> David
>
> David Merrill
> Senior Systems Engineer,
> Managed and Private/Hybrid Cloud Services
> OTELCO
> 92 Oak Street, Portland ME 04101
> office 207.772.5678 
> http://www.otelco.com/cloud-and-managed-services
>
>
>
> On 5/18/20, 7:44 AM, "Andrija Panic"  wrote:
>
> David,
>
> the procedure you laid out is correct - rollback DB, downgrade, etc
> As Luis mentioned, make sure to also rollback the
> "mysql-connector-java" in
> case it was upgraded (if you left the mysql repo enabled during the
> upgrade).
>
> There are ways to hack the DB, vm_template table and also use the
> "cloud-install-sys-tmplt"... but a clean rollback (as it's very
> easy in
> your test env) is a much better way to proceed.
>
> Cheers
> Andrija
>
> On Mon, 18 May 2020 at 10:06, Richard Lawley <
> rich...@richardlawley.com>
> wrote:
>
> > The database upgrade does not happen unless the systemVM
> templates have
> > been added, so nothing non-reversible has happened yet.  You can
> just use
> > yum to downgrade to 4.11.2 and you'll be fine (we've also
> accidentally done
> > this at some point!).
> >
> > I'd recommend disabling your cloudstack yum repo so that it
> doesn't happen
> > again.
> >
> > On Sun, 17 May 2020 at 21:01, Luis 
> wrote:
> >
> > > Dont forget to downgrade the database connector or it will not
> work
> > >
> > > Sent from Yahoo Mail on Android
> > >
> > >   On Sun, May 17, 2020 at 3:47 PM, David Merrill<
> > david.merr...@otelco.com>
> > > wrote:   Hi All,
> > >
> > > I've got a CloudStack 4.11.2 lab running on a single host &
> made the
> > > (dumb) mistake of running OS updates without updating the
> system VMs
> > > beforehand. The CloudStack packages upgraded to 4.11.3 just
> fine but now
> > > CloudStack management services won't start (see below).
> > >
> > > Here's my question:
> > >
> > >  - Is it *at all* possible to rectify this (now, post package
> updates) by
> > > getting the new system templates in via the CLI tools
> > >
> >
> (/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt)?
> > >  - The 

Re: [VOTE] Apache CloudStack 4.14.0.0 RC3

2020-05-19 Thread Andrija Panic
Hi all,

In my humble opinion, we should release 4.14 as it is (considering we have
enough votes), but we'll further investigate the actual/behind-the-scene
root-cause for the vSphere 6.7 harakiri (considering 6.0 and 6.5 are not
affected) - this is possibly a VMware bug and we'll certainly try to
address it.

If I don't hear any more concerns or -1 votes until tomorrow morning CET
time, I will proceed with concluding the voting process and crafting the
release.

Thanks,
Andrija

On Tue, 19 May 2020 at 19:23, Pavan Kumar Aravapalli <
pavankuma...@accelerite.com> wrote:

> Thank you Bobby and Daan for the update. However, I have not encountered
> such an issue while doing dev tests with VMware 5.5 & 6.5.
>
>
>
>
>
> Regards,
>
> Pavan Aravapalli.
>
>
> 
> From: Daan Hoogland 
> Sent: 19 May 2020 20:56
> To: users 
> Cc: d...@cloudstack.apache.org 
> Subject: Re: [VOTE] Apache CloudStack 4.14.0.0 RC3
>
> Thanks Bobby,
> All, I've been closely working with Bobby and seen the same things. Does
> anybody see any issues releasing 4.14 based on this code? I can confirm
> that it is not Pavernalli's UEFI PR and we should not create a new PR to
> revert it.
> thanks for all of your patience,
>
> (this is me giving a binding +1)
>
>
> On Tue, May 19, 2020 at 5:04 PM Boris Stoyanov <
> boris.stoya...@shapeblue.com>
> wrote:
>
> > Hi guys,
> >
> > I've done more testing around this and I can now confirm it has nothing
> to
> > do with cloudstack code.
> >
> > I've tested it with rc3, reverted UEFI PR and 4.13.1 (which does not
> > happen to have the feature at all). Also I've used a matrix of VMware
> > version of 6.0u2, 6.5u2 and 6.7u3.
> >
> > The bug is reproducible with all the cloudstack versions, and only vmware
> > 6.7u3, I was not able to reproduce this with 6.5/6.0. All of my results
> > during testing show it must be related to that specific version of
> VMware.
> >
> > Therefore I'm reversing my '-1' and giving a +1 vote on the RC. I think
> it
> > needs to be included in release notes to refrain from that version for
> now
> > until further investigation is done.
> >
> > Thanks,
> > Bobby.
> >
> > On 19.05.20, 10:08, "Boris Stoyanov" 
> > wrote:
> >
> > Indeed it is severe, but please note it's a corner case which was
> > unearthed almost by accident. It comes down to using a new feature
> > (selecting a boot protocol) together with a corrupted template. So with
> > already existing templates I would not expect to encounter it.
> >
> > As for recovery, we've managed to recover vCenter and Cloudstack
> after
> > reboots of the vCenter machine and the Cloudstack management service.
> > There's no exact points to recover for now, but restart seems to work.
> > By graceful failure I mean cloudstack erroring out the deployment and the
> > VM finishing in ERROR state, while connection to and operability of the
> > vCenter cluster remain the same.
> >
> > We're currently exploring options to fix this. One could be to disable
> > the feature for VMware and work to introduce a more sustainable fix in the
> > next release. The other is to add more guarding code when installing a
> > template, since VMware doesn't actually allow you to install that
> > particular template but cloudstack does. We'll keep you posted.
> >
> > Thanks,
> > Bobby.
> >
> > On 18.05.20, 23:01, "Marcus"  wrote:
> >
> > The issue sounds severe enough that a release note probably won't
> > suffice -
> > unless there's a documented way to recover we'd never want to
> > leave a
> > system susceptible to being unrecoverable, even if it's rarely
> > triggered.
> >
> > What's involved in "failing gracefully"? Is this a small fix, or
> an
> > overhaul?  Perhaps the new feature could be disabled for VMware,
> or
> > disabled altogether until a fix is made in a patch release.
> >
> > Does it only affect new templates, or is there a risk that an
> > existing
> > template out in vSphere could suddenly cause problems?
> >
> > On Mon, May 18, 2020 at 12:49 AM Boris Stoyanov <
> > boris.stoya...@shapeblue.com> wrote:
> >
> > > Hi guys,
> > >
> > > A little further info on this, it appears when we use a
> > corrupted template
> > > and UEFI/Legacy mode when deploy a VM, it breaks the connection
> > between
> > > cloudstack and vCenter.
> > >
> > > All hosts become unreachable and basically the cluster is not
> > functional,
> > > have not investigated a way to recover this but seems like a
> > huge mess..
> > > Please note that user is not able to register such template in
> > vCenter
> > > directly, but cloudstack allows using it.
> > >
> > > Open to discuss if we'll fix this, since it's expected users to
> > use
> > > working templates, I think we should be failing gracefully and
> > such action
> > > should not be 

Re: CloudStack - Ubuntu/KVM (all in one management-server/host) - OS Upgrade w/o updating System VMs

2020-05-19 Thread David Merrill
Reporting in, I was able to roll-back the cloudstack packages and reload the 
backup of the cloud database and get the UI going again.

So that's good, but a couple questions on this page (and registering the 
template in the UI):

 - 
http://docs.cloudstack.apache.org/en/latest/upgrading/upgrade/upgrade-4.11.html#update-system-vm-templates

1. I understand it's important to use the values specified, I assume setting 
the "name" specifically as stated is what helps cloudstack "find" the new 
templates when upgrading the database?

2. In the UI there's an HVM checkbox (that's defaulted to checked), but the 
documentation (above) doesn't specifically refer to it. So for my clarity 
(hopefully I just need to be reminded), leave it checked or unchecked? Does it 
matter in the context of the system VMs?

Thanks!
David

David Merrill
Senior Systems Engineer,
Managed and Private/Hybrid Cloud Services
OTELCO
92 Oak Street, Portland ME 04101
office 207.772.5678 
http://www.otelco.com/cloud-and-managed-services
 
Confidentiality Message
The information contained in this e-mail transmission may be confidential and 
legally privileged. If you are not the intended recipient, you are notified 
that any dissemination, distribution, copying or other use of this information, 
including attachments, is prohibited. If you received this message in error, 
please call me at 207.772.5678  so this error can be 
corrected.
 

On 5/18/20, 10:22 AM, "David Merrill"  wrote:

OK, got it, digging in...

Thanks everyone,
David

David Merrill
Senior Systems Engineer,
Managed and Private/Hybrid Cloud Services
OTELCO
92 Oak Street, Portland ME 04101
office 207.772.5678 
http://www.otelco.com/cloud-and-managed-services
 
 

On 5/18/20, 7:44 AM, "Andrija Panic"  wrote:

David,

the procedure you laid out is correct - rollback DB, downgrade, etc
As Luis mentioned, make sure to also rollback the 
"mysql-connector-java" in
case it was upgraded (if you left the mysql repo enabled during the
upgrade).

There are ways to hack the DB, vm_template table and also use the
"cloud-install-sys-tmplt"... but a clean rollback (as it's very easy in
your test env) is a much better way to proceed.

Cheers
Andrija

On Mon, 18 May 2020 at 10:06, Richard Lawley 
wrote:

> The database upgrade does not happen unless the systemVM templates 
have
> been added, so nothing non-reversible has happened yet.  You can just 
use
> yum to downgrade to 4.11.2 and you'll be fine (we've also 
accidentally done
> this at some point!).
>
> I'd recommend disabling your cloudstack yum repo so that it doesn't 
happen
> again.
>
> On Sun, 17 May 2020 at 21:01, Luis  
wrote:
>
> > Dont forget to downgrade the database connector or it will not work
> >
> > Sent from Yahoo Mail on Android
> >
> >   On Sun, May 17, 2020 at 3:47 PM, David Merrill<
> david.merr...@otelco.com>
> > wrote:   Hi All,
> >
> > I've got a CloudStack 4.11.2 lab running on a single host & made the
> > (dumb) mistake of running OS updates without updating the system VMs
> > beforehand. The CloudStack packages upgraded to 4.11.3 just fine 
but now
> > CloudStack management services won't start (see below).
> >
> > Here's my question:
> >
> >  - Is it *at all* possible to rectify this (now, post package 
updates) by
> > getting the new system templates in via the CLI tools
> >
> 
(/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt)?
> >  - The error on the logs *seems* simple, if management could find 
the new
> > template maybe things could proceed?
> >
> > My impression is no given research I've done so far which seems to 
amount
> > to:
> >
> >  - roll back to 4.11.2
> >  - use the pre-upgrade dump of the DB (which I made)
> >  - start management services
> >  - get the new system VMs properly (like I should have in the first place)
> >  - upgrade to 4.11.3
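The "get the new system VMs properly" step is normally done by seeding the new system VM template into secondary storage with the script mentioned above. This dry-run sketch only prints the command; the NFS export, mount point, template URL, and flags are assumptions from memory and should be verified against the install docs for the target version:

```shell
#!/bin/sh
# Dry-run sketch of seeding a system VM template: prints the commands only.
# The NFS export, mount point, URL, and flags are assumptions - verify them
# against the CloudStack installation docs for your version and hypervisor.
seed_template_plan() {
  cat <<'EOF'
mount -t nfs <nfs-server>:/export/secondary /mnt/secondary
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
  -m /mnt/secondary \
  -u <systemvm-template-url-for-your-version> \
  -h kvm -F
EOF
}
seed_template_plan
```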
> >
> > Humbly (& slightly embarrassed) yours, and thanks,
> > David
> >
> > David Merrill
> > Senior Systems Engineer,
> > Managed and 

Re: [VOTE] Apache CloudStack 4.14.0.0 RC3

2020-05-19 Thread Pavan Kumar Aravapalli
Thank you Bobby and Daan for the update. However, I have not encountered such
an issue while doing dev tests with VMware 5.5 & 6.5.

Regards,

Pavan Aravapalli.



From: Daan Hoogland 
Sent: 19 May 2020 20:56
To: users 
Cc: d...@cloudstack.apache.org 
Subject: Re: [VOTE] Apache CloudStack 4.14.0.0 RC3

Thanks Bobby,
All, I've been closely working with Bobby and seen the same things. Does
anybody see any issues releasing 4.14 based on this code? I can confirm
that it is not caused by Pavan Aravapalli's UEFI PR and we should not create a new PR to
revert it.
thanks for all of your patience,

(this is me giving a binding +1)


On Tue, May 19, 2020 at 5:04 PM Boris Stoyanov 
wrote:

> Hi guys,
>
> I've done more testing around this and I can now confirm it has nothing to
> do with cloudstack code.
>
> I've tested it with rc3, reverted UEFI PR and 4.13.1 (which does not
> happen to have the feature at all). Also I've used a matrix of VMware
> version of 6.0u2, 6.5u2 and 6.7u3.
>
> The bug is reproducible with all the cloudstack versions, and only vmware
> 6.7u3, I was not able to reproduce this with 6.5/6.0. All of my results
> during testing show it must be related to that specific version of VMware.
>
> Therefore I'm reversing my '-1' and giving a +1 vote on the RC. I think it
> needs to be included in release notes to refrain from that version for now
> until further investigation is done.
>
> Thanks,
> Bobby.
>
> On 19.05.20, 10:08, "Boris Stoyanov" 
> wrote:
>
> Indeed it is severe, but please note it's a corner case which was
> unearthed almost by accident. It falls down to using a new feature of
> selecting a boot protocol and the template must be corrupted. So with
> already existing templates I would not expect to encounter it.
>
> As for recovery, we've managed to recover vCenter and Cloudstack after
> reboots of the vCenter machine and the Cloudstack management service.
> There's no exact points to recover for now, but restart seems to work.
> By graceful failure I mean, cloudstack erroring out the deployment and
> VM finished in ERROR state, meanwhile connection and operability with
> vCenter cluster remains the same.
>
> We're currently exploring options to fix this, one could be to disable
> the feature for VMWare and work to introduce more sustainable fix in next
> release. Other is to look for more guarding code when installing a
> template, since VMware doesn’t actually allow you install that particular
> template but cloudstack does. We'll keep you posted.
>
> Thanks,
> Bobby.
>
> On 18.05.20, 23:01, "Marcus"  wrote:
>
> The issue sounds severe enough that a release note probably won't
> suffice -
> unless there's a documented way to recover we'd never want to
> leave a
> system susceptible to being unrecoverable, even if it's rarely
> triggered.
>
> What's involved in "failing gracefully"? Is this a small fix, or an
> overhaul?  Perhaps the new feature could be disabled for VMware, or
> disabled altogether until a fix is made in a patch release.
>
> Does it only affect new templates, or is there a risk that an
> existing
> template out in vSphere could suddenly cause problems?
>
> On Mon, May 18, 2020 at 12:49 AM Boris Stoyanov <
> boris.stoya...@shapeblue.com> wrote:
>
> > Hi guys,
> >
> > A little further info on this, it appears when we use a
> corrupted template
> > and UEFI/Legacy mode when deploy a VM, it breaks the connection
> between
> > cloudstack and vCenter.
> >
> > All hosts become unreachable and basically the cluster is not
> functional,
> > have not investigated a way to recover this but seems like a
> huge mess..
> > Please note that user is not able to register such template in
> vCenter
> > directly, but cloudstack allows using it.
> >
> > Open to discuss if we'll fix this, since it's expected users to
> use
> > working templates, I think we should be failing gracefully and
> such action
> > should not be able to create downtime on such a large scale.
> >
> > I believe the boot type feature is new one and it's not
> available in older
> > releases, so this issue should be limited to 4.14/current master.
> >
> > Thanks,
> > Bobby.
> >
> > On 15.05.20, 17:07, "Boris Stoyanov" <
> boris.stoya...@shapeblue.com>
> > wrote:
> >
> > I'll have to -1 RC3, we've discovered details about an issue
> which is
> > causing severe consequences with a particular hypervisor in the
> afternoon.
> > We'll need more time to investigate before disclosing.
> >
> > Bobby.
> >
> > On 15.05.20, 9:12, "Boris Stoyanov" <
> boris.stoya...@shapeblue.com>
> > wrote:
> >
> > +1 (binding)
>   

Re: [VOTE] Apache CloudStack 4.14.0.0 RC3

2020-05-19 Thread Daan Hoogland
Thanks Bobby,
All, I've been closely working with Bobby and seen the same things. Does
anybody see any issues releasing 4.14 based on this code? I can confirm
that it is not caused by Pavan Aravapalli's UEFI PR and we should not create a new PR to
revert it.
thanks for all of your patience,

(this is me giving a binding +1)



Re: [VOTE] Apache CloudStack 4.14.0.0 RC3

2020-05-19 Thread Boris Stoyanov
Hi guys,

I've done more testing around this and I can now confirm it has nothing to do
with the CloudStack code.

I've tested it with RC3, with the UEFI PR reverted, and with 4.13.1 (which
does not have the feature at all). I've also used a matrix of VMware versions:
6.0u2, 6.5u2 and 6.7u3.

The bug is reproducible with all the CloudStack versions, but only with VMware
6.7u3; I was not able to reproduce it with 6.5/6.0. All of my results during
testing show it must be related to that specific version of VMware.

Therefore I'm reversing my '-1' and giving a +1 vote on the RC. I think the
release notes should advise refraining from that VMware version until further
investigation is done.

Thanks,
Bobby.


Re: [VOTE] Apache CloudStack 4.14.0.0 RC3

2020-05-19 Thread Boris Stoyanov
Indeed it is severe, but please note it's a corner case which was unearthed
almost by accident. It comes down to using a new feature (selecting a boot
protocol) with a corrupted template, so with already existing templates I
would not expect to encounter it.

As for recovery, we've managed to recover vCenter and CloudStack after reboots
of the vCenter machine and the CloudStack management service. There are no
exact recovery steps for now, but a restart seems to work.
By graceful failure I mean CloudStack erroring out the deployment and the VM
finishing in ERROR state, while connectivity and operability with the vCenter
cluster remain the same.

We're currently exploring options to fix this. One could be to disable the
feature for VMware and work to introduce a more sustainable fix in the next
release. Another is to add more guarding code when installing a template,
since VMware doesn't actually allow you to install that particular template
but CloudStack does. We'll keep you posted.
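One cheap form of guarding code at install time would be a sanity check that the uploaded file is at least a well-formed OVA (a tar archive containing an .ovf descriptor) before handing it to vCenter. A minimal sketch, with a hypothetical helper name and synthetic files standing in for a real template:

```shell
#!/bin/sh
# Heuristic pre-check: an OVA is a tar archive whose member list includes
# an .ovf descriptor. Returns 0 (success) if the file passes the check.
looks_like_valid_ova() {
  tar -tf "$1" 2>/dev/null | grep -q '\.ovf$'
}

# Demo on a synthetic archive (file names here are hypothetical):
tmp=$(mktemp -d)
touch "$tmp/demo.ovf" "$tmp/demo.vmdk"
tar -cf "$tmp/demo.ova" -C "$tmp" demo.ovf demo.vmdk
if looks_like_valid_ova "$tmp/demo.ova"; then
  echo "template looks sane"
fi
```

A check like this would not catch every corrupted template, but it rejects files that vCenter itself would refuse to register, which is the mismatch described above.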

Thanks,
Bobby.

On 18.05.20, 23:01, "Marcus"  wrote:

The issue sounds severe enough that a release note probably won't suffice -
unless there's a documented way to recover we'd never want to leave a
system susceptible to being unrecoverable, even if it's rarely triggered.

What's involved in "failing gracefully"? Is this a small fix, or an
overhaul?  Perhaps the new feature could be disabled for VMware, or
disabled altogether until a fix is made in a patch release.

Does it only affect new templates, or is there a risk that an existing
template out in vSphere could suddenly cause problems?

On Mon, May 18, 2020 at 12:49 AM Boris Stoyanov <
boris.stoya...@shapeblue.com> wrote:

> Hi guys,
>
> A little further info on this, it appears when we use a corrupted template
> and UEFI/Legacy mode when deploy a VM, it breaks the connection between
> cloudstack and vCenter.
>
> All hosts become unreachable and basically the cluster is not functional,
> have not investigated a way to recover this but seems like a huge mess..
> Please note that user is not able to register such template in vCenter
> directly, but cloudstack allows using it.
>
> Open to discuss if we'll fix this, since it's expected users to use
> working templates, I think we should be failing gracefully and such action
> should not be able to create downtime on such a large scale.
>
> I believe the boot type feature is new one and it's not available in older
> releases, so this issue should be limited to 4.14/current master.
>
> Thanks,
> Bobby.
>
> On 15.05.20, 17:07, "Boris Stoyanov" 
> wrote:
>
> I'll have to -1 RC3, we've discovered details about an issue which is
> causing severe consequences with a particular hypervisor in the afternoon.
> We'll need more time to investigate before disclosing.
>
> Bobby.
>
> On 15.05.20, 9:12, "Boris Stoyanov" 
> wrote:
>
> +1 (binding)
>
> I've executed upgrade tests with the following configurations:
>
> 4.13.1 with KVM on CentOS7 hosts
> 4.13 with VMware6.5 hosts
> 4.11.3 with KVM on CentOS7 hosts
> 4.11.2 with XenServer7 hosts
> 4.11.1 with VMware 6.7
> 4.9.3 with XenServer 7 hosts
> 4.9.2 with KVM on CentOS 7 hosts
>
> Also I've run basic lifecycle operations on the following
> components:
> VMs
> Volumes
> Infra (zones, pod, clusters, hosts)
> Networks
> and more
>
> I did not come across any problems during this testing.
>
> Thanks,
> Bobby.
>
>
> On 11.05.20, 18:21, "Andrija Panic" 
> wrote:
>
> Hi All,
>
> I've created a 4.14.0.0 release (RC3), with the following
> artefacts up for
> testing and a vote:
>
> Git Branch and Commit SH:
>
> 
https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.14.0.0-RC20200511T1503
> Commit: 6f96b3b2b391a9b7d085f76bcafa3989d9832b4e
>
> Source release (checksums and signatures are available at the
> same
> location):
> https://dist.apache.org/repos/dist/dev/cloudstack/4.14.0.0/
>
> PGP release keys (signed using 3DC01AE8):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
>
> The vote will be open until 14th May 2020, 17.00 CET (72h).
>
> For sanity in tallying the vote, can PMC members please be
> sure to indicate
> "(binding)" with their vote?
>
> [ ] +1 approve
> [ ] +0 no opinion
>