Re: [VOTE] Apache CloudStack 4.14.0.0 RC3

2020-05-20 Thread Riepl, Gregor (SWISS TXT)
Hi everyone

Sorry for the late response, but I have a few concerns:


  *   As Bobby stated, this bug seems to only occur with VMware 6.7+, and it 
sounds to me like they should take action on it. Does someone track this with 
VMware?
  *   Do I understand correctly that the issue only occurs when the image is 
set to UEFI mode, but the VM is configured as Legacy Boot in CloudStack? How 
would this combination even work? I think CloudStack should either reject such 
a mismatch or autocorrect it. Or at least display a warning to the user.
  *   If the bug can break vCenter (if only temporarily), there should 
definitely be some sort of safeguard around it, even if it isn't a proper fix 
or workaround.
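The reject-or-autocorrect guard suggested in the second bullet could be sketched roughly as below. This is purely illustrative: `BootMode` and `validate_boot_mode` are hypothetical names, not actual CloudStack APIs, and the real deployment path does not expose such a function.

```python
from enum import Enum


class BootMode(Enum):
    LEGACY = "legacy"
    UEFI = "uefi"


def validate_boot_mode(template_mode: BootMode, vm_mode: BootMode,
                       autocorrect: bool = False) -> BootMode:
    """Hypothetical sketch of a template/VM boot-mode mismatch guard.

    Rejects the mismatch by default, or falls back to the template's
    boot mode (with a warning) when autocorrect is enabled.
    """
    if template_mode == vm_mode:
        return vm_mode
    if autocorrect:
        # Fall back to the mode the template was built for, with a warning.
        print(f"Warning: VM requested {vm_mode.value} boot but template "
              f"requires {template_mode.value}; using {template_mode.value}.")
        return template_mode
    raise ValueError(
        f"Boot mode mismatch: template is {template_mode.value}, "
        f"VM is configured for {vm_mode.value}.")
```

Either behavior would at least surface the mismatch to the user instead of passing a broken combination through to vSphere.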

Regards,
Gregor

From: Andrija Panic 
Sent: 19 May 2020 21:11
To: users 
Cc: d...@cloudstack.apache.org 
Subject: Re: [VOTE] Apache CloudStack 4.14.0.0 RC3

Hi all,

In my humble opinion, we should release 4.14 as it is (considering we have
enough votes), but we'll further investigate the actual, behind-the-scenes
root cause of the vSphere 6.7 harakiri (considering 6.0 and 6.5 are not
affected) - this is possibly a VMware bug and we'll certainly try to
address it.

If I don't hear any more concerns or -1 votes until tomorrow morning CET
time, I will proceed with concluding the voting process and crafting the
release.

Thanks,
Andrija

On Tue, 19 May 2020 at 19:23, Pavan Kumar Aravapalli <
pavankuma...@accelerite.com> wrote:

> Thank you Bobby and Daan for the update. However I have not encountered
> such issue while doing dev test with Vmware 5.5 & 6.5.
>
>
>
>
>
> Regards,
>
> Pavan Aravapalli.
>
>
> 
> From: Daan Hoogland 
> Sent: 19 May 2020 20:56
> To: users 
> Cc: d...@cloudstack.apache.org 
> Subject: Re: [VOTE] Apache CloudStack 4.14.0.0 RC3
>
> Thanks Bobby,
> All, I've been closely working with Bobby and seen the same things. Does
> anybody see any issues releasing 4.14 based on this code? I can confirm
> that it is not Pavernalli's UEFI PR and we should not create a new PR to
> revert it.
> thanks for all of your patience,
>
> (this is me giving a binding +1)
>
>
> On Tue, May 19, 2020 at 5:04 PM Boris Stoyanov <
> boris.stoya...@shapeblue.com>
> wrote:
>
> > Hi guys,
> >
> > I've done more testing around this and I can now confirm it has nothing
> to
> > do with cloudstack code.
> >
> > I've tested it with rc3, reverted UEFI PR and 4.13.1 (which does not
> > happen to have the feature at all). Also I've used a matrix of VMware
> > version of 6.0u2, 6.5u2 and 6.7u3.
> >
> > The bug is reproducible with all the cloudstack versions, and only vmware
> > 6.7u3, I was not able to reproduce this with 6.5/6.0. All of my results
> > during testing show it must be related to that specific version of
> VMware.
> >
> > Therefore I'm reversing my '-1' and giving a +1 vote on the RC. I think
> it
> > needs to be included in release notes to refrain from that version for
> now
> > until further investigation is done.
> >
> > Thanks,
> > Bobby.
> >
> > On 19.05.20, 10:08, "Boris Stoyanov" 
> > wrote:
> >
> > Indeed it is severe, but please note it's a corner case which was
> > unearthed almost by accident. It falls down to using a new feature of
> > selecting a boot protocol and the template must be corrupted. So with
> > already existing templates I would not expect to encounter it.
> >
> > As for recovery, we've managed to recover vCenter and Cloudstack
> after
> > reboots of the vCenter machine and the Cloudstack management service.
> > There's no exact points to recover for now, but restart seems to work.
> > By graceful failure I mean, cloudstack erroring out the deployment
> and
> > VM finished in ERROR state, meanwhile connection and operability with
> > vCenter cluster remains the same.
> >
> > We're currently exploring options to fix this, one could be to
> disable
> > the feature for VMWare and work to introduce more sustainable fix in next
> > release. Other is to look for more guarding code when installing a
> > template, since VMware doesn’t actually allow you install that particular
> > template but cloudstack does. We'll keep you posted.
> >
> > Thanks,
> > Bobby.
> >
> > On 18.05.20, 23:01, "Marcus"  wrote:
> >
> > The issue sounds severe enough that a release note probably won't
> > suffice -
> > unless there's a documented way to recover we'd never want to
> > leave a
> > system susceptible to being unrecoverable, even if it's rarely
> > triggered.
> >
> > What's involved in "failing gracefully"? Is this a small fix, or
> an
> > overhaul?  Perhaps the new feature could be disabled for VMware,
> or
> > disabled altogether until a fix is made in a patch release.
> >
> > Does it only affect new templates, or is there a risk that an
> > existing
> > template out in vSphere could suddenly cause problems?
> >
> > On Mon, May 18, 2020 at 12:49 

Re: Testing CS 4.13.1

2020-05-20 Thread Andrija Panic
You can't attach ANYTHING (screenshots included) to the mailing list.
Please upload it to Pastebin or similar and share the link.

Cheers

On Wed, 20 May 2020 at 01:28, Luis Martinez 
wrote:

> Hi, did you get the log file I attached? any advice?
>
> Thank you
>
> On 5/18/2020 4:59 PM, Sergey Levitskiy wrote:
> > Plz share full management-server.log since your snippet doesn't have
> relevant lines.
> >
> > Thanks,
> > Sergey
> >
> >
> > On 5/18/20, 1:25 PM, "Luis Martinez" 
> wrote:
> >
> >  This is the error I see in the logs, I am trying to ssh to the VM
> for
> >  secondary storage but i am not able to do it.
> >
> >
> >  2020-05-18 16:23:26,778 DEBUG [c.c.c.ConsoleProxyManagerImpl]
> >  (consoleproxy-1:ctx-195a645d) (logid:e873b8f5) Zone 1 is ready to
> launch
> >  console proxy
> >  2020-05-18 16:23:29,820 DEBUG [c.c.s.StatsCollector]
> >  (StatsCollector-2:ctx-8b9729d8) (logid:d7872342) AutoScaling
> Monitor is
> >  running...
> >  2020-05-18 16:23:30,138 DEBUG [c.c.s.StatsCollector]
> >  (StatsCollector-5:ctx-a1b22863) (logid:bd79141a) HostStatsCollector
> is
> >  running...
> >  2020-05-18 16:23:30,195 DEBUG [c.c.a.t.Request]
> >  (StatsCollector-5:ctx-a1b22863) (logid:bd79141a) Seq
> >  1-1544453197211369507: Received:  { Ans: , MgmtId: 266682308174733,
> via:
> >  1(cloudstackvm1), Ver: v1, Flags: 10, { GetHostStatsAnswer } }
> >  2020-05-18 16:23:30,522 DEBUG [c.c.s.StatsCollector]
> >  (StatsCollector-3:ctx-ef56a510) (logid:1a0c1267) StorageCollector is
> >  running...
> >  2020-05-18 16:23:30,530 DEBUG [c.c.s.StatsCollector]
> >  (StatsCollector-3:ctx-ef56a510) (logid:1a0c1267) *There is no
> secondary
> >  storage VM for secondary storage host nfs://
> 10.0.8.10/mnt/CS01/secondary*
> >  2020-05-18 16:23:30,533 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
> >  (StatsCollector-3:ctx-ef56a510) (logid:1a0c1267)
> >  getCommandHostDelegation: class
> com.cloud.agent.api.GetStorageStatsCommand
> >  2020-05-18 16:23:30,533 DEBUG [c.c.h.XenServerGuru]
> >  (StatsCollector-3:ctx-ef56a510) (logid:1a0c1267) We are returning
> the
> >  default host to execute commands because the command is not of Copy
> type.
> >  2020-05-18 16:23:30,612 DEBUG [c.c.a.t.Request]
> >  (StatsCollector-3:ctx-ef56a510) (logid:1a0c1267) Seq
> >  1-1544453197211369508: Received:  { Ans: , MgmtId: 266682308174733,
> via:
> >  1(cloudstackvm1), Ver: v1, Flags: 10, { GetStorageStatsAnswer } }
> >  2020-05-18 16:23:33,611 DEBUG [c.c.a.m.AgentManagerImpl]
> >  (AgentManager-Handler-3:null) (logid:) Ping from Routing host
> >  1(cloudstackvm1)
> >
> >  On 5/18/2020 2:43 PM, Rohit Yadav wrote:
> >  > Hi Luis,
> >  >
> >  > Please use the correct systemvmtemplate version/link for
> CloudStack 4.13.1.0.
> >  >
> >  > Refer to the install/upgrade docs such as
> http://docs.cloudstack.apache.org/en/4.13.1.0/upgrading/upgrade/upgrade-4.12.html
> >  >
> >  > Regards.
> >  >
> >  > Regards,
> >  > Rohit Yadav
> >  >
> >  > 
> >  > From: Luis Martinez 
> >  > Sent: Monday, May 18, 2020 11:07:20 PM
> >  > To: users@cloudstack.apache.org 
> >  > Subject: Testing CS 4.13.1
> >  >
> >  > Hi Group
> >  >
> >  > I need help, I am installing 4.13.1 for testing, installation is
> fine
> >  > but secondary storage is not working, I tried to ssh to the VM
> and I am
> >  > unable to do it, I used the following lines in different
> installations
> >  > to see if this fixes the problem but no. how can I troubleshoot
> this? or
> >  > am I using the wrong version?
> >  >
> >  >
> >  >
> /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt
> >  > -m /mnt/secondary -u
> >  >
> http://download.cloudstack.org/systemvm/4.11/systemvmtemplate-4.11.3-kvm.qcow2.bz2
> >  > -h kvm -F
> >  >
> /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt
> >  > -m /mnt/secondary -u
> >  >
> http://download.cloudstack.org/systemvm/4.11/systemvmtemplate-4.11.1-kvm.qcow2.bz2
> >  > -h kvm -F
> >  >
> /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt
> >  > -m /mnt/secondary -u
> >  >
> http://download.cloudstack.org/systemvm/4.11/systemvmtemplate-4.11.0-kvm.qcow2.bz2
> >  > -h kvm -F
> >  >
> >  > Thank you in Advance.
> >  >
> >  >
> >  > rohit.ya...@shapeblue.com
> >  > www.shapeblue.com
> >  > 3 London Bridge Street,  3rd floor, News Building, London  SE1
> 9SGUK
> >  > @shapeblue
> >  >
> >  >
> >  >
> >
>


-- 

Andrija Panić


Re: VirtIO Network Adapter for system vms on KVM Hypervisor

2020-05-20 Thread Andrija Panic
This is most probably due to the QEMU version (all Debian OS types should be
mapped to VirtIO).

Best,

On Wed, 20 May 2020 at 00:55, Sean Lair  wrote:

> Just for feedback, we are 4.11.3 and run KVM on CentOS 7.  Our 4.11.3
> template is set to Debian GNU/Linux 8 (64-bit).  Our lspci is shown below:
>
> root@r-281-VM:~# lspci
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev
> 02)
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton
> II]
> 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton
> II] (rev 01)
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
> 00:02.0 VGA compatible controller: Cirrus Logic GD 5446
> 00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
> 00:04.0 Communication controller: Red Hat, Inc Virtio console
> 00:05.0 SCSI storage controller: Red Hat, Inc Virtio block device
> 00:06.0 System peripheral: Intel Corporation 6300ESB Watchdog Timer
> 00:07.0 Ethernet controller: Red Hat, Inc Virtio network device
> 00:08.0 Ethernet controller: Red Hat, Inc Virtio network device
> 00:09.0 Ethernet controller: Red Hat, Inc Virtio network device
>
>
> -Original Message-
> From: Andrija Panic 
> Sent: Saturday, May 16, 2020 7:12 AM
> To: users 
> Subject: Re: VirtIO Network Adapter for system vms on KVM Hypervisor
>
> Thanks for the feedback on that one, Rafal.
>
> Regards
>
> On Sat, 16 May 2020 at 12:58, Rafal Turkiewicz  wrote:
>
> > Just for a record
> >
> > I have tested this with Debian GNU/Linux 7.0 (64-bit) OS Type and it
> > also worked. It basically breaks as soon as I pick Debian GNU/Linux 8
> (64-bit).
> >
> > Thanks
> >
> > On 2020/05/15 14:00:53, Rafal Turkiewicz  wrote:
> > > Andrija,
> > >
> > > You are the man! I have changed the OS Type to the default Debian 5
> > > x64
> > and boom! All sorted.
> > >
> > > It's really odd that picking older OS Type solved the issue where in
> > fact the systemVM is running Debian 9. Is this a BUG of some sort?
> > >
> > > I might try and experiment with other OS Type Debian version X to
> > > see
> > where it falls but for now I'm all happy!
> > >
> > > Once again thank you very much for the pointer!
> > >
> > > Raf
> > >
> > > On 2020/05/15 13:51:01, Andrija Panic  wrote:
> > > > In the upgrade guide, we always advise (when registering the new
> > systeVM
> > > > template) to go as:
> > > >
> > > >   OS Type: Debian GNU/Linux 7.0 (64-bit) (or the highest
> > > > Debian
> > release
> > > > number available in the dropdown)
> > > >
> > > > That being said, in the clean 4.13 installation, the OS type is
> > > > set to Debian 5 x64 - so try each version and in between destroy VR
> (i.e.
> > restart
> > > > the network with cleanup) and observe "lspci" if virtio or intel
> > > > NICs
> > - but
> > > > also make sure that each time the VR is created on KVM host (i.e.
> > > > not
> > on
> > > > XEN).
> > > >
> > > > In order to change OS type for systemVM template, you will have to
> > > > use
> > DB
> > > > - modify the "vm_template" table - update the "guest_os_id" field
> > value for
> > > > that specific template, to the ID from the "guest_os" table where
> > > > name=Debian XXX 64.
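The DB change described above might look like the following sketch (the `vm_template.guest_os_id` and `guest_os` names come from the email; the display name pattern and the template id are placeholders - always back up the database first):

```sql
-- Hypothetical sketch; placeholder values, back up the DB before running.
-- 1. Find the id of the desired guest OS entry:
SELECT id, display_name
  FROM guest_os
 WHERE display_name LIKE 'Debian GNU/Linux 7%(64-bit)%';

-- 2. Point the systemVM template at that guest OS:
UPDATE vm_template
   SET guest_os_id = <debian_os_id>
 WHERE id = <systemvm_template_id>;
```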
> > > >
> > > > Hope that solves the issue - should by all means.
> > > >
> > > > Regards
> > > > Andrija
> > > >
> > > >
> > > > On Fri, 15 May 2020 at 15:33, Rafal Turkiewicz 
> > wrote:
> > > >
> > > > > Hello Andrija,
> > > > >
> > > > > Thanks for your input the OS Type for the systemVM template is
> > > > > set to "Debian GNU/Linux 8 (64-bit)"
> > > > >
> > > > > I think I forgot to mention a very important aspect of my setup.
> > > > > This Cloudstack instance is powering XenServer and KVM where KVM
> > > > > was added recently.
> > > > >
> > > > > Your message made me think and look at my other (test lab) setup
> > where
> > > > > CloudStack is only powering KVM hypervisors. I can confirm all
> > > > > VRs
> > are
> > > > > running with virtio which implies there got to be something on
> > > > > the
> > my mixed
> > > > > HV CloudStack.
> > > > >
> > > > > I will keep looking into this but if you have any further
> > > > > thoughts
> > on this
> > > > > please let me know.
> > > > >
> > > > > Raf
> > > > >
> > > > > On 2020/05/15 11:14:37, Andrija Panic 
> > wrote:
> > > > > > Rafal,
> > > > > >
> > > > > > what is the OS type you defined for the systemVM template?
> > > > > >
> > > > > > In my env, VR (VPC) - all interfaces are VirtIO.
> > > > > >
> > > > > > Best
> > > > > > Andrija
> > > > > >
> > > > > > On Fri, 15 May 2020 at 12:14, Rafal Turkiewicz
> > > > > > 
> > > > > wrote:
> > > > > >
> > > > > > > Platform:
> > > > > > > CloudStack 4.11.2 on CentOS 7 KVM Hypervisor on CentOS 7
> > > > > > >
> > > > > > > I have found some throughput issues on our VirtualRuters and
> > > > > > > I've
> > > > > tracked
> > > > > > > it down to CPU IRQ hitting 99% on the VR which was relat

Re: [VOTE] Apache CloudStack 4.14.0.0 RC3

2020-05-20 Thread Marcus
I would say, if it is proven that this happens with existing released
CloudStack versions, with or without the UEFI feature, against a specific
VMware release with a specific broken template, then it becomes an
environment issue and shouldn't block the release.  In this case it would
not matter if we tried to revert the feature, or if we did or did not
release 4.14, the users who would hit this would be hitting this now in
live environments, with the released versions of CloudStack.

To be clear, I'm not 100% certain this is exactly what Bobby was saying,
but if this is the case then I think it should not block us.

On Wed, May 20, 2020 at 1:00 AM Riepl, Gregor (SWISS TXT) <
gregor.ri...@swisstxt.ch> wrote:

> Hi everyone
>
> Sorry for the late response, but I have a few concerns:
>
>
>   *   As Bobby stated, this bug seems to only occur with VMware 6.7+, and
> it sounds to me like they should take action on it. Does someone track this
> with VMware?
>   *   Do I understand correctly that the issue only occurs when the image
> is set to UEFI mode, but the VM is configured as Legacy Boot in CloudStack?
> How would this combination even work? I think CloudStack should either
> reject such a mismatch or autocorrect it. Or at least display a warning to
> the user.
>   *   If the bug can break vCenter (if only temporarily), there should
> definitely some sort of safeguard around it, even if it isn't a proper fix
> or workaround.
>
> Regards,
> Gregor
> 
> From: Andrija Panic 
> Sent: 19 May 2020 21:11
> To: users 
> Cc: d...@cloudstack.apache.org 
> Subject: Re: [VOTE] Apache CloudStack 4.14.0.0 RC3
>
> Hi all,
>
> In my humble opinion, we should release 4.14 as it is (considering we have
> enough votes), but we'll further investigate the actual/behind-the-scene
> root-cause for the vSphere 6.7 harakiri (considering 6.0 and 6.5 are not
> affected) - this is possibly a VMware bug and we'll certainly try to
> address it.
>
> If I don't hear any more concerns or -1 votes until tomorrow morning CET
> time, I will proceed with concluding the voting process and crafting the
> release.
>
> Thanks,
> Andrija
>
> On Tue, 19 May 2020 at 19:23, Pavan Kumar Aravapalli <
> pavankuma...@accelerite.com> wrote:
>
> > Thank you Bobby and Daan for the update. However I have not encountered
> > such issue while doing dev test with Vmware 5.5 & 6.5.
> >
> >
> >
> >
> >
> > Regards,
> >
> > Pavan Aravapalli.
> >
> >
> > 
> > From: Daan Hoogland 
> > Sent: 19 May 2020 20:56
> > To: users 
> > Cc: d...@cloudstack.apache.org 
> > Subject: Re: [VOTE] Apache CloudStack 4.14.0.0 RC3
> >
> > Thanks Bobby,
> > All, I've been closely working with Bobby and seen the same things. Does
> > anybody see any issues releasing 4.14 based on this code? I can confirm
> > that it is not Pavernalli's UEFI PR and we should not create a new PR to
> > revert it.
> > thanks for all of your patience,
> >
> > (this is me giving a binding +1)
> >
> >
> > On Tue, May 19, 2020 at 5:04 PM Boris Stoyanov <
> > boris.stoya...@shapeblue.com>
> > wrote:
> >
> > > Hi guys,
> > >
> > > I've done more testing around this and I can now confirm it has nothing
> > to
> > > do with cloudstack code.
> > >
> > > I've tested it with rc3, reverted UEFI PR and 4.13.1 (which does not
> > > happen to have the feature at all). Also I've used a matrix of VMware
> > > version of 6.0u2, 6.5u2 and 6.7u3.
> > >
> > > The bug is reproducible with all the cloudstack versions, and only
> vmware
> > > 6.7u3, I was not able to reproduce this with 6.5/6.0. All of my results
> > > during testing show it must be related to that specific version of
> > VMware.
> > >
> > > Therefore I'm reversing my '-1' and giving a +1 vote on the RC. I think
> > it
> > > needs to be included in release notes to refrain from that version for
> > now
> > > until further investigation is done.
> > >
> > > Thanks,
> > > Bobby.
> > >
> > > On 19.05.20, 10:08, "Boris Stoyanov" 
> > > wrote:
> > >
> > > Indeed it is severe, but please note it's a corner case which was
> > > unearthed almost by accident. It falls down to using a new feature of
> > > selecting a boot protocol and the template must be corrupted. So with
> > > already existing templates I would not expect to encounter it.
> > >
> > > As for recovery, we've managed to recover vCenter and Cloudstack
> > after
> > > reboots of the vCenter machine and the Cloudstack management service.
> > > There's no exact points to recover for now, but restart seems to work.
> > > By graceful failure I mean, cloudstack erroring out the deployment
> > and
> > > VM finished in ERROR state, meanwhile connection and operability with
> > > vCenter cluster remains the same.
> > >
> > > We're currently exploring options to fix this, one could be to
> > disable
> > > the feature for VMWare and work to introduce more sustainable fix in
> next
> > > release. Other is to look for more guarding code when

Re: [VOTE] Apache CloudStack 4.14.0.0 RC3

2020-05-20 Thread Andrija Panic
@gregor - Legacy boot should be fine with UEFI images (I've run that
combination on some of my laptops); UEFI itself is not the problem - the
issue also happens with 4.13, and any VirtualBox OVA file will trigger it

###
To conclude the issue, based on my few hours of testing today:

- happens when you deliberately use a VirtualBox OVA template with vSphere
(who would do that, and why, is another topic...), in ACS 4.13.x and
4.14/master

...out of which...:

- does NOT happen with vCenter 6.0 and 6.5 (confirmed by Daan/Bobby),
proper OVF parsing takes place and an error message is generated in ACS logs
- NOT tested:   6.7 / 6.7 U1xxx / 6.7 U2xxx (i.e. not tested with any
variant < 6.7 U3)
- issues happen with vCenter 6.7 U3 / U3a / U3b / U3f - these were
explicitly tested by me, and some vCenter services would crash (though still
appearing as running) - the problem is solved by restarting (most?)
services - namely, restarting the "VMware afd Service" will trigger the
dependent services to restart, and after a while vCenter is up again (I
could not pin down which single service is the actual culprit)
- Worth mentioning this was observed on vCenter on Windows Server, not the
VCSA appliance

-  seems FINE - NO ISSUES with vCenter 6.7 U3g (the latest 6.7 U3 variant
at the moment - build 16046470 from 28.04.2020): the VM deployment fails
gracefully with a proper error message about not being able to create the
spec file from the (bad) OVF.


Since the issue is solved in the (current) latest vSphere 6.7 U3g variant,
I will make sure to add a proper warning to both the 4.13.1 and 4.14.0.0
release notes (4.13 is when we started supporting vSphere 6.7, and the same
issue is present there).

I'll proceed tomorrow with releasing 4.14 based on the voting done so far.

Thanks

On Wed, 20 May 2020 at 22:09, Marcus  wrote:

> I would say, if it is proven that this happens with existing released
> CloudStack versions, with or without the UEFI feature, against a specific
> VMware release with a specific broken template, then it becomes an
> environment issue and shouldn't block the release.  In this case it would
> not matter if we tried to revert the feature, or if we did or did not
> release 4.14, the users who would hit this would be hitting this now in
> live environments, with the released versions of CloudStack.
>
> To be clear, I'm not 100% certain this is exactly what Bobby was saying,
> but if this is the case then I think it should not block us.
>
> On Wed, May 20, 2020 at 1:00 AM Riepl, Gregor (SWISS TXT) <
> gregor.ri...@swisstxt.ch> wrote:
>
> > Hi everyone
> >
> > Sorry for the late response, but I have a few concerns:
> >
> >
> >   *   As Bobby stated, this bug seems to only occur with VMware 6.7+, and
> > it sounds to me like they should take action on it. Does someone track
> this
> > with VMware?
> >   *   Do I understand correctly that the issue only occurs when the image
> > is set to UEFI mode, but the VM is configured as Legacy Boot in
> CloudStack?
> > How would this combination even work? I think CloudStack should either
> > reject such a mismatch or autocorrect it. Or at least display a warning
> to
> > the user.
> >   *   If the bug can break vCenter (if only temporarily), there should
> > definitely some sort of safeguard around it, even if it isn't a proper
> fix
> > or workaround.
> >
> > Regards,
> > Gregor
> > 
> > From: Andrija Panic 
> > Sent: 19 May 2020 21:11
> > To: users 
> > Cc: d...@cloudstack.apache.org 
> > Subject: Re: [VOTE] Apache CloudStack 4.14.0.0 RC3
> >
> > Hi all,
> >
> > In my humble opinion, we should release 4.14 as it is (considering we
> have
> > enough votes), but we'll further investigate the actual/behind-the-scene
> > root-cause for the vSphere 6.7 harakiri (considering 6.0 and 6.5 are not
> > affected) - this is possibly a VMware bug and we'll certainly try to
> > address it.
> >
> > If I don't hear any more concerns or -1 votes until tomorrow morning CET
> > time, I will proceed with concluding the voting process and crafting the
> > release.
> >
> > Thanks,
> > Andrija
> >
> > On Tue, 19 May 2020 at 19:23, Pavan Kumar Aravapalli <
> > pavankuma...@accelerite.com> wrote:
> >
> > > Thank you Bobby and Daan for the update. However I have not encountered
> > > such issue while doing dev test with Vmware 5.5 & 6.5.
> > >
> > >
> > >
> > >
> > >
> > > Regards,
> > >
> > > Pavan Aravapalli.
> > >
> > >
> > > 
> > > From: Daan Hoogland 
> > > Sent: 19 May 2020 20:56
> > > To: users 
> > > Cc: d...@cloudstack.apache.org 
> > > Subject: Re: [VOTE] Apache CloudStack 4.14.0.0 RC3
> > >
> > > Thanks Bobby,
> > > All, I've been closely working with Bobby and seen the same things.
> Does
> > > anybody see any issues releasing 4.14 based on this code? I can confirm
> > > that it is not Pavernalli's UEFI PR and we should not create a new PR
> to
> > > revert it.