Hi Li,
We are using Ansible to build and manage capacity of our CloudStack
environments end to end, from setting up zones, projects, networks,
and compute offerings to pods, clusters, and hosts.
Make sure to use a current Ansible, 2.8 or newer. See
https://docs.ansible.com/ansible/latest/modules/list_of_
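As a minimal sketch of what that looks like, here is a playbook using the stock `cs_*` modules that ship with Ansible 2.8. It assumes API credentials are configured via `cloudstack.ini` or the `CLOUDSTACK_*` environment variables; the zone/pod names, IP ranges, and offering sizes are all placeholders:

```yaml
# Sketch: bootstrap basic zone capacity with the cloudstack (cs_*) modules.
# All names and addresses below are illustrative.
- hosts: localhost
  connection: local
  tasks:
    - name: Ensure the zone exists
      cs_zone:
        name: zone-01
        dns1: 8.8.8.8
        network_type: Advanced

    - name: Ensure a pod exists in the zone
      cs_pod:
        name: pod-01
        zone: zone-01
        start_ip: 10.100.10.101
        gateway: 10.100.10.1
        netmask: 255.255.255.0

    - name: Ensure a compute offering exists
      cs_service_offering:
        name: 2cpu-4gb
        cpu_number: 2
        cpu_speed: 2000
        memory: 4096
```

The modules are idempotent, so the same playbook doubles as a capacity audit when run in check mode.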
nd such are used by the
> hosts.
>
> I think Pierre-Luc and Syed have a clearer picture of all the moving
> pieces, but that is a quick summary of what I know without digging in.
>
> Hope that helps.
>
> Cheers,
>
> Will
>
> On Tue, Jul 16, 2019, 10:24 PM Jean-Fra
Hello everyone,
I was wondering if it is common, or even recommended, to use an
S3-compatible storage system as the only secondary storage provider?
The environment is 4.11.3.0 with KVM (CentOS 7.6), and our tier-1 storage
solution also provides an S3-compatible object store (apparently MinIO
und
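For reference, an S3-compatible endpoint is registered as an image (secondary) store through the addImageStore API. A hypothetical CloudMonkey sketch follows; the detail keys (accesskey, secretkey, endpoint, bucket) and the requirement for an NFS staging store should be verified against the 4.11 API docs for the S3 provider, and every name/value below is a placeholder:

```shell
# Hypothetical sketch: register an S3-compatible object store as the
# region-wide image store via CloudMonkey. Verify the detail keys
# against your version's addImageStore documentation.
cmk add imagestore name=s3-secondary provider=S3 \
    details[0].key=accesskey details[0].value=MYACCESSKEY \
    details[1].key=secretkey details[1].value=MYSECRETKEY \
    details[2].key=endpoint  details[2].value=s3.example.local:9000 \
    details[3].key=bucket    details[3].value=cloudstack-secondary
```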
Good morning all,
I have this strange ConfigDrive problem that shows up on 4.11.2, and only
if upgraded from 4.9.
The problem shows up with the DefaultL2NetworkOfferingConfigDriveVlan
network offering.
In the lab, with a fresh 4.11.2.0 install, the ConfigDrive-enabled network will
present an ISO to g
Hi all,
I'm kicking the tires with managed storage under 4.11.2 with KVM and
Datera as primary storage.
My first attempt at creating a VM from a template stored on NFS secondary
storage failed silently. Looking at the SSVM cloud logs, I saw no exception. The VM
root disk gets properly created on the
Paul, is it true to say that parameter can only be enabled if you have NFS
primary storage enabled in the zone? Or is there any chance this works
with managed storage?
best,
Jfn
On Tue, Feb 12, 2019 at 4:29 AM Paul Angus wrote:
> Great to hear that, thanks for letting us know!
>
> paul.a
We did a quick test with Hyper-V 2016 under 4.9.3, and some API changes in
Hyper-V, we believe, prevented us from deploying a zone correctly. We did not
investigate further.
On Tue, Nov 20, 2018 at 6:18 AM Andrija Panic
wrote:
> Hi all,
>
> anyone has experience with running Hyper-V with CloudStack, wh
Just by the look of the "Protocol family unavailable" error: can you
disable IPv6 in the JVM startup script?
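A sketch of the usual JVM-side workaround, assuming the management server's launcher reads a `JAVA_OPTS`-style variable (the exact variable and startup file depend on how the Docker image launches the management server):

```shell
# Force the JVM onto the IPv4 stack; "Protocol family unavailable"
# typically means IPv6 is disabled on the host (common in containers)
# while the JVM still tries to open IPv6 sockets.
export JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv6Addresses=false"
```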
On Mon, Nov 5, 2018 at 4:37 AM li li wrote:
> Hi ALL
>
>
> I'm trying to encapsulate cloudstack 4.11 into docker. After the build
> succeeds, cloudstack-management cannot function p
-1
Only because we believe this issue is a regression when upgrading from 4.9.3:
existing network offerings created under 4.9.3 should continue to work when
creating new networks under 4.11.2. Please see
https://github.com/apache/cloudstack/issues/2989
best,
Jfn
On Mon, Nov 5, 2018 at 5:04 AM Boris
wrote:
> Hi Eric & Jean-Francois,
> Thanks for your work in testing.
> There is an open vote, could you now (and in future) respond to the
> thread, the official vote will/would pass as it stands. (I only caught this
> through doing a final sweep of the mailing lists).
>
>
Test was on 4.11.2rc3. Will send to the dev list.
Hi all,
I was wondering if anyone else has had this problem after upgrading from 4.9.
All our networks use a custom network offering with no services
defined, since the physical network provides DHCP and DNS. The environment is
CentOS 7, KVM with the openvswitch driver.
Now, after the upgrade to 4.
t a lot more
> attention than Jira.
>
>
> - Si
>
>
> ____
> From: Jean-Francois Nadeau
> Sent: Tuesday, October 23, 2018 11:32 AM
> To: users@cloudstack.apache.org
> Subject: Re: Host HA vs transient NFS problems on KVM
>
> I will fil
working automatic HA, I
> > agree, but it is far better to be woken up at 3am to deal with
> restarting a
> > handful of vms and perhaps a KVM host force reboot than dealing with mass
> > KVM hosts reboots and/or trying to find duplicate vms lurking somewhere
> on
> > the
Dear community,
I want to share my concern about upgrading from 4.9 to 4.11 with regard to how
the host HA framework works and how various failure conditions are handled.
Since we have been running CS on 4.9.3 with NFS on KVM, VM HA has been
working as expected when a hypervisor crashed, and I agree
late seems fine, but it may be possible that an older
> systemvm.iso has patched the systemvm, which is why you're seeing the error.
>
>
> - Rohit
>
> <https://cloudstack.apache.org>
>
>
>
> ____
> From: Jean-Francois Nade
Same with the systemvmtemplate-4.11.0-kvm.qcow2 image. I guess I don't
understand how the template gets customized and why it doesn't work for us.
On Fri, Oct 19, 2018 at 11:09 AM Jean-Francois Nadeau <
the.jfnad...@gmail.com> wrote:
> So at first I did not upgrade the age
>
> - Rohit
>
> <https://cloudstack.apache.org>
>
>
>
> ____
> From: Jean-Francois Nadeau
> Sent: Friday, October 19, 2018 2:13:16 AM
> To: users@cloudstack.apache.org
> Subject: New SSVM wont start after upgrade from 4.9.3 to 4.11.2rc3
>
>
Hi all,
After upgrading from 4.9.3 to 4.11.2rc3 on CentOS 7/KVM, the old SSVMs were
running and working fine until I destroyed them to get them on the current
version (I uploaded the 4.11.2rc3 template before the upgrade).
Now, whatever I do, there's nothing running on the new console proxy VM.
Oct 1
If the XenTools are installed and running in the guest OS, it should detect
the shutdown sent via XAPI.
On Wed, Jun 6, 2018 at 6:58 PM, Yiping Zhang wrote:
> We are using XenServers with our CloudStack instances.
>
> On 6/6/18, 3:11 PM, "Jean-Francois Nadeau"
> wrote:
On KVM, AFAIK, the shutdown is the equivalent of pressing the power
button. To get the Linux OS to catch this and initiate a clean shutdown,
you need the acpid service running in the guest OS.
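A quick sketch of both sides of that, assuming a CentOS 7 guest and libvirt on the KVM host (the VM name is a placeholder):

```shell
# Inside the guest: make sure acpid is installed and running, so the
# ACPI power-button event triggers a clean OS shutdown.
yum install -y acpid
systemctl enable --now acpid

# On the KVM host: send the ACPI shutdown signal to the guest.
virsh shutdown my-vm --mode acpi
```

Without acpid (or an equivalent handler such as systemd-logind's button handling), the guest simply ignores the power-button event and the hypervisor eventually force-stops it.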
On Wed, Jun 6, 2018 at 6:01 PM, Yiping Zhang wrote:
> Hi, all:
>
> We have a few VM instances which wi
.@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London WC2N 4HSUK
> > @shapeblue
> >
> >
> >
> >> On 23 Jan 2018, at 16:23, Jean-Francois Nadeau
> wrote:
> >>
> >> Thank you both Boris and Nux!
> >>
e are
> doing testing for 4.11.
>
> HTH
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>
>
+1 on the versions of ACS in the matrix. That is, it sounds like today most
production setups run 4.9 or earlier, and until 4.11 is GA and stabilizes,
4.9 sounds like the only good option for a go-live today. Knowing how
long 4.9 will be supported is key.
On Wed, Jan 17, 2018 at 9:50 AM, Ron Wh
Hi all,
I'm testing 4.11-rc1 and the new L2 network type feature as shown at
http://www.shapeblue.com/layer-2-networks-in-cloudstack/
I want to use this as a replacement for a shared network offering with no
DHCP, which works to support an external DHCP server but still requires
filling in some CIDR i
On Sat, Dec 23, 2017 at 10:14 AM, Jean-Francois Nadeau <
the.jfnad...@gmail.com> wrote:
> Clearly the management server doesn't realize the instance on the failed
> host is not running... but the host is in Alert state and powered down,
> and missing NFS heartbeats.
>
r:ctx-66fbe484) (logid:1f53cd63) Found 0 VM, not running on
host 4
Next step?
On Sat, Dec 23, 2017 at 9:49 AM, Jean-Francois Nadeau <
the.jfnad...@gmail.com> wrote:
> I'd really like to get to the bottom of this. It does sound like the
> behavior mentioned in https://issues.apa
I'd really like to get to the bottom of this. It does sound like the
behavior mentioned in https://issues.apache.org/jira/browse/CLOUDSTACK-5582,
but that should be long fixed.
One suspect log entry (may be unrelated) I noticed is this recurring exception
in the manager logs:
ERROR [c.c.v.UserVmManagerI
Good morning,
I'm new to ACS and doing a POC with 4.10 on CentOS 7 and KVM.
I'm trying to recover VMs after a host failure (powered off from OOB).
Primary storage is NFS and IPMI is configured for the KVM hosts. The zone is
in advanced mode with VLAN separation, and I created a shared network with no
service