> this will happen even if no VMs on the host are set to “HA”?
Yes. To verify that the rebooted hosts were indeed fenced because of a missing 
heartbeat on the primary storage, look for "heartbeat" in /var/log/messages.
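
For example (the exact log wording varies a bit between versions, and the agent 
log path assumes a default packaged install):

    grep -i heartbeat /var/log/messages
    grep -i heartbeat /var/log/cloudstack/agent/agent.log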

> What would then be the procedure to perform maintenance on the (first?) 
> primary NFS storage server
In order to perform maintenance on a primary storage, you should put it into 
maintenance mode first (this can be done from the GUI or the API).
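
For example, with CloudMonkey it would look something like this (the pool id 
below is a placeholder; use whatever "list storagepools" returns for that NFS pool):

    list storagepools
    enable storagemaintenance id=<pool-uuid>
    # ...do the maintenance on the NFS server, bring it back up...
    cancel storagemaintenance id=<pool-uuid>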

> That still would not explain why the “dedicated” hosts didn’t reboot
Were these hosts connected to that NFS primary storage hosted on the mgmt server?

> how would this work if primary storage were e.g. iSCSI?
I believe we perform the heartbeat check and host fencing for NFS storage only.

> Is there no way to disable that, except for modifying kvmheartbeat.sh?
AFAIK, there isn't.
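
If you do go down that route, the fence branch of kvmheartbeat.sh looks roughly 
like the lines below (quoting from memory, so check the copy shipped with your 
agent, typically under /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/). 
Commenting out the sysrq line and keeping only the logger call would stop the 
forced reboot, though you would be giving up the fencing protection:

    # (approximate) fence branch of kvmheartbeat.sh
    /usr/bin/logger -t heartbeat "unable to write heartbeat to storage, rebooting"
    sync &
    sleep 5
    echo b > /proc/sysrq-trigger    # this is the line that forces the reboot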

Regards,
Somesh


-----Original Message-----
From: Frank Louwers [mailto:fr...@openminds.be] 
Sent: Wednesday, August 19, 2015 2:48 PM
To: users@cloudstack.apache.org
Subject: RE: CS Manager down: all hypervisors reboot

On 19 Aug 2015 at 20:44:47, Somesh Naidu (somesh.na...@citrix.com) wrote:
Management server down would not result in hosts being rebooted. Primary storage 
down will. 

As you mentioned, you have hosted your primary storage (NFS) on the management 
server node. So yes, taking it down will cause all hosts connected to it to 
reboot. It doesn’t matter how many VMs use that particular storage. 

I am not sure if there is a better way of doing this, but you could modify 
"kvmheartbeat.sh" to disable the reboot on losing the primary storage connection. 
Hi Somesh,

Thanks for the explanation!

Am I right (after reading your mail and 
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201508.mbox/%3ccalfpzo5cotx0qz+d_oxezjgytau+fa+mzxg_yqeuzswi_9g...@mail.gmail.com%3e
by Marcus) that this will happen even if no VMs on the host are set to “HA”?

What would then be the procedure to perform maintenance on the (first?) primary 
NFS storage server, and how would this work if the primary storage were e.g. iSCSI?

That still would not explain why the “dedicated” hosts didn’t reboot, but I 
assume I should take a look at kvmheartbeat.sh then. Is there no way to disable 
that, except for modifying kvmheartbeat.sh?

Regards,

Frank

Regards, 
Somesh 

-----Original Message----- 
From: Frank Louwers [mailto:fr...@openminds.be] 
Sent: Wednesday, August 19, 2015 12:19 PM 
To: users@cloudstack.apache.org 
Subject: CS Manager down: all hypervisors reboot 

Hi all, 

We had an interesting outage this morning. We took the Cloudstack Manager node 
down for hardware upgrades and kernel updates, and it seems all “non-dedicated” 
hosts rebooted. 

We run KVM on CS 4.4.latest. 

Is this “normal behaviour”, why does it do that, and how do I disable that? 

The Manager is also a primary storage provider (NFS export), but all VMs use 
local storage (except 1). 


Regards, 

Frank 
