Hi. If a node loses access to its NFS primary storage, the ACS agent on that
node reboots it (that is the agent's designed behaviour). If you really have
NFS storage served from all three nodes, you will get a cluster-wide reboot
every time any one host goes down.
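Roughly the idea, as a simplified sketch (this is NOT CloudStack's actual
kvmheartbeat.sh; the path, interval, and retry limit below are made-up
placeholders): the agent keeps writing a heartbeat file onto each NFS primary
storage mount, and once those writes fail repeatedly the host fences itself
with a reboot so HA can restart its VMs elsewhere. If every node also exports
storage, one failed node makes the heartbeat writes fail on the others too,
and they all reboot.

#!/usr/bin/env python3
# Simplified sketch of the NFS-heartbeat self-fencing idea (not the real
# CloudStack agent code): keep writing a timestamp to a file on the NFS
# primary storage mount; if the writes keep failing, assume the storage is
# gone and reboot the host so its VMs can be restarted elsewhere.
import os
import subprocess
import time

HEARTBEAT_FILE = "/mnt/primary/KVMHA/hb-node1"   # hypothetical mount/path
INTERVAL_SECONDS = 60                            # hypothetical interval
MAX_FAILURES = 5                                 # hypothetical retry limit

def write_heartbeat(path: str) -> bool:
    """Try to write the current timestamp to the heartbeat file on NFS."""
    try:
        with open(path, "w") as fh:
            fh.write(str(int(time.time())))
            fh.flush()
            os.fsync(fh.fileno())
        return True
    except OSError:
        # NFS mount unreachable, stale, or read-only
        return False

def main() -> None:
    failures = 0
    while True:
        if write_heartbeat(HEARTBEAT_FILE):
            failures = 0
        else:
            failures += 1
            if failures >= MAX_FAILURES:
                # Self-fence: the whole host reboots, taking down every VM
                # on it, which is what produces the cluster-wide reboots
                # when the storage lives on the hypervisor hosts themselves.
                subprocess.run(["reboot"], check=False)
                return
        time.sleep(INTERVAL_SECONDS)

if __name__ == "__main__":
    main()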

On 28 Oct 2017 at 3:02, "Simon Weller" <swel...@ena.com.invalid> wrote:

> Hi James,
>
>
> Can you elaborate a bit further on the storage? You say you're running NFS
> on all 3 nodes, can you explain how it is setup?
>
> Also, what version of ACS are you running?
>
>
> - Si
>
>
>
>
> ________________________________
> From: McClune, James <mcclu...@norwalktruckers.net>
> Sent: Friday, October 27, 2017 2:21 PM
> To: users@cloudstack.apache.org
> Subject: Problems with KVM HA & STONITH
>
> Hello Apache CloudStack Community,
>
> My setup consists of the following:
>
> - Three nodes (NODE1, NODE2, and NODE3)
> NODE1 and NODE2 are running Ubuntu 16.04.3, and NODE3 is running Ubuntu
> 14.04.5.
> - Management Server (running on separate VM, not in cluster)
>
> The three nodes use KVM as the hypervisor. I also configured primary and
> secondary storage on all three nodes, using NFS for both. VM operations
> work great, and live migration works great.
>
> However, when a host goes down, the HA functionality does not work at all.
> Instead of the VMs being started on another available host, the downed host
> seems to trigger STONITH. When STONITH happens, all hosts in the cluster go
> down. This not only defeats HA, it also takes down perfectly good VMs. I
> have read countless articles and documentation related to this issue and
> still cannot find a viable solution. I really want to use Apache
> CloudStack, but I cannot put it into production while STONITH keeps
> happening.
>
> I think I have something misconfigured. I thought I would reach out to the
> CloudStack community and ask for some friendly assistance.
>
> If there is anything (system-wise) you need in order to troubleshoot this
> issue further, please let me know and I'll send it. I appreciate any help
> with this issue!
>
> --
>
> Thanks,
>
> James
>
