Hi Parth,
To answer your questions, VM-HA does not restart VMs on an alternate host if
the original host goes down. The management server (without host-HA) cannot
tell what happened to the host. It cannot tell if there was a failure in the
agent, loss of connectivity to the management NIC or
Hi Paul
My testing does indeed end up with the failed host in maintenance mode but the
VMs are never migrated. As I posted earlier the management server seems to be
saying there is no other host that the VM can be migrated to.
Couple of questions if you have the time to respond -
1) this ar
Hello Guys,
I think it is related
==
https://github.com/apache/cloudstack/pull/2474
===
On 03/14/2018 02:05 PM, Jon Marshall wrote:
> Hi Paul
> My testing does indeed end up with the failed host in maintenance mode but the
> VMs are never migrated. As I posted earlier the managem
I'd need to do some testing, but I suspect that your problem is that you only
have two hosts. At the point that one host is deemed out of service, you only
have one host left. With only one host, CloudStack will show the cluster as
ineligible.
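A minimal sketch of the eligibility rule Paul is describing, purely as illustration (this is not CloudStack code, and the real deployment planner also weighs capacity, tags, etc.):

```python
# Hypothetical sketch: once one of two hosts is deemed out of service,
# only one host remains, and the cluster shows as ineligible for HA.

def cluster_ha_eligible(healthy_hosts: int) -> bool:
    """Assumed rule: HA recovery needs at least two healthy hosts."""
    return healthy_hosts >= 2

# Two-host cluster: fine while both are up, ineligible once one fails.
print(cluster_ha_eligible(2))  # True
print(cluster_ha_eligible(1))  # False
```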
It is extremely common for any system working as
That would make sense.
I have another server being used for something else at the moment so I will add
that in and update this thread when I have tested
Jon
From: Paul Angus
Sent: 14 March 2018 09:16
To: [email protected]
Subject: RE: KVM HostHA
I
Hi Paul,
sorry to bump in the middle of the thread, but just curious about the idea
behind Host-HA and why it behaves the way you explained above:
Would it make more sense (or not?) that when MGMT detects the agent is
unreachable or the host unreachable (or after an unsuccessful i.e. agent restart,
etc...,t
Hi Andrija,
There are two types of checks Host-HA does to determine whether a host is healthy:
1. Health checks - pings the host as soon as there’s a connection issue with the
agent.
If that fails,
2. Activity checks - checks if there are any write operations on the disks of
the VMs that are run
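The two-stage decision Boris describes could be sketched like this; names such as the states and the input flags are stand-ins for illustration, not CloudStack internals:

```python
# Hypothetical sketch of the Host-HA check flow: health check first,
# then activity check on the VMs' disks, as described above.

def host_state(agent_connected: bool, host_pings: bool,
               disk_activity_seen: bool) -> str:
    if agent_connected:
        return "healthy"
    # 1. Health check: ping the host as soon as the agent connection drops.
    if host_pings:
        return "healthy"   # host is up; only the agent is in trouble
    # 2. Activity check: are the VMs' disks still being written to?
    if disk_activity_seen:
        return "suspect"   # VMs appear alive; do not recover/fence yet
    return "down"          # no ping, no disk writes: safe to recover

print(host_state(False, False, False))  # down
print(host_state(False, True, False))   # healthy
print(host_state(False, False, True))   # suspect
```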
Hi Boris,
ok thanks for the explanation - that makes sense, and covers my "exception
case" that I have.
This is atm only available for NFS, from what I could read (KVM on NFS)?
Cheers
On 14 March 2018 at 13:02, Boris Stoyanov
wrote:
> Hi Andrija,
>
> There’s two types of checks Host-HA is doing to d
yes, KVM + NFS shared storage.
Boris.
[email protected]
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue
> On 14 Mar 2018, at 14:51, Andrija Panic wrote:
>
> Hi Boris,
>
> ok thanks for the explanation - that makes sense, and covers my "exce
Hi Paul and Andrija,
I don't know the inner workings of the Host-HA feature, but what Paul explained,
my ACS 4.11 does the same without even host HA or IPMI access. As I stated
earlier multiple times, without host HA and IPMI, my HA-enabled VMs
running on a normal host get restarted on another suitable
Hello Rafael,
I'm aware of it, thank you. I also assumed that there could be some problem
with it; that's why I shared a link (the second one) in my first post, hoping that
someone could confirm that assumption.
After I set ca.plugin.root.auth.strictness to false, everything worked just
fine
Hi,
Can I change/edit a Zone’s Public IP Range?
Ex:
my conf
start 200.0.0.1 end 200.0.0.20 netmask 255.255.240.0
desired conf
start 200.0.0.1 end 200.0.10.254 netmask 255.255.240.0
I need more Public IPs but on initial config I set only 0.1 to 0.20 range.
Thanks
Matheus Fontes
Hey Matheus,
You can edit your range of IPs by going into Infrastructure -> Pod -> Edit.
There, change the end IP to whatever you want. It will not allow the change if
your guest IP range overlaps with your management IP range on a basic
network. So make sure you take care of that.
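A quick way to sanity-check this before editing, using only Python's stdlib ipaddress module (the addresses below reuse Matheus's example; the overlap helper is just an illustration, not how CloudStack validates it):

```python
# Check (a) that two inclusive IP ranges don't overlap, and (b) that the
# desired new end IP still fits inside the pod's netmask.
import ipaddress

def ranges_overlap(start_a, end_a, start_b, end_b):
    """True if inclusive ranges [start_a, end_a] and [start_b, end_b] intersect."""
    a1, a2 = int(ipaddress.ip_address(start_a)), int(ipaddress.ip_address(end_a))
    b1, b2 = int(ipaddress.ip_address(start_b)), int(ipaddress.ip_address(end_b))
    return a1 <= b2 and b1 <= a2

# 255.255.240.0 is a /20, so the subnet runs 200.0.0.0 - 200.0.15.255:
net = ipaddress.ip_network("200.0.0.0/20")
print(ipaddress.ip_address("200.0.10.254") in net)  # True: new end IP fits

# Example overlap check against a (made-up) management range:
print(ranges_overlap("200.0.0.1", "200.0.10.254",
                     "200.0.12.1", "200.0.12.50"))  # False: no conflict
```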
Regards
Swastik
On
KVM Host-HA was developed to bring CloudStack closer to parity with how VMware
vSphere handles its HA.
Basically, in the old model there were several corner cases where CloudStack
did not know whether the hypervisor crashed or just lost connectivity to the
management server.
We've added logic to make sure that h