[ClusterLabs] fence_sanlock and pacemaker

2015-08-26 Thread Laurent B.
Gents, I'm trying to configure an HA cluster with RHEL 6.5. Everything goes well except the fencing. The cluster's nodes are not connected to the management LAN (where all the iLO/UPS/APC devices sit), and there is no plan to connect them to this LAN. With these constraints, I figured out that
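For context, a rough sketch of what shared-storage fencing with fence_sanlock could look like under Pacemaker when no management LAN is reachable. The device path, resource and node names, and the devices/host_id parameters below are illustrative assumptions, not values from the post; the agent's actual parameters should be checked first.

  # Check the agent's real parameter names before configuring anything:
  pcs stonith describe fence_sanlock

  # Hypothetical: one stonith resource per node over an assumed shared
  # block device /dev/cluster_vg/fence_lv (host_id must be unique per node).
  pcs stonith create fence-node1 fence_sanlock \
      devices=/dev/cluster_vg/fence_lv host_id=1 pcmk_host_list=node1
  pcs stonith create fence-node2 fence_sanlock \
      devices=/dev/cluster_vg/fence_lv host_id=2 pcmk_host_list=node2

  # Keep each node from running its own fencing resource:
  pcs constraint location fence-node1 avoids node1
  pcs constraint location fence-node2 avoids node2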

[ClusterLabs] multiple drives looks like balancing but why and causing troubles

2015-08-26 Thread Streeter, Michelle N
I have a two-node cluster. Both nodes are virtual and have five shared drives attached via a SAS controller. For some reason, the cluster shows half the drives started on each node. Not sure if this is called split brain or not. It definitely looks like load balancing. But I did not
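This is Pacemaker's default placement spreading resources across nodes rather than split brain; if all five drive resources are meant to start on one node, grouping them is the usual answer. A minimal sketch with the crm shell, assuming the resources are named drive1..drive5 (the real names are not given in the post):

  # Hypothetical: put all five drive resources into one group so they are
  # started together on a single node and move as a unit on failover.
  crm configure group drives drive1 drive2 drive3 drive4 drive5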

Re: [ClusterLabs] multiple drives looks like balancing but why and causing troubles

2015-08-26 Thread Digimer
On 26/08/15 02:46 PM, Streeter, Michelle N wrote: I have a two-node cluster. Both nodes are virtual and have five shared drives attached via a SAS controller. For some reason, the cluster shows half the drives started on each node. Not sure if this is called split brain or not.

Re: [ClusterLabs] fence_sanlock and pacemaker

2015-08-26 Thread Andrew Beekhof
On 27 Aug 2015, at 4:11 am, Laurent B. laure...@qmail.re wrote: Gents, I'm trying to configure an HA cluster with RHEL 6.5. Everything goes well except the fencing. The cluster's nodes are not connected to the management LAN (where all the iLO/UPS/APC devices sit), and there is no plan

[ClusterLabs] resource-stickiness

2015-08-26 Thread Rakovec Jost
Hi list, I have configured a simple cluster on SLES 11 SP4 and have a problem with auto_failover off. The problem is that whenever I migrate the resource group via HAWK, my configuration changes from: location cli-prefer-aapche aapche role=Started 10: sles2 to: location cli-ban-aapche-on-sles1
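Migrating a resource through HAWK or the crm shell adds its own cli-prefer-*/cli-ban-* location constraint with an INFINITY score on top of any hand-written rule; clearing that constraint afterwards restores the original placement scores. A short sketch with the crm shell, using the resource and node names from the post:

  # Move the group to sles2; HAWK/crm records this as a cli-prefer-*/cli-ban-*
  # constraint with score inf/-inf:
  crm resource migrate aapche sles2

  # After the move, drop the migration constraint again so only the
  # hand-written "location ... 10: sles2" rule remains:
  crm resource unmigrate aapche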

Re: [ClusterLabs] resource-stickiness

2015-08-26 Thread Rakovec Jost
Sorry, one typo: the problem is the same. location cli-prefer-aapche aapche role=Started 10: sles2 changes to: location cli-prefer-aapche aapche role=Started inf: sles2 It keeps changing to infinity. My configuration is: node sles1 node sles2 primitive filesystem Filesystem \ params
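Since the thread is about resource-stickiness: if the goal is simply that resources stay where they are instead of failing back, a cluster-wide stickiness default avoids depending on location scores at all. A minimal sketch (the value 100 is just an example, not taken from the post):

  # Give every resource a default stickiness so it stays on its current
  # node rather than moving back when the preferred node returns:
  crm configure rsc_defaults resource-stickiness=100

  # Any leftover cli-prefer constraint can also be removed directly:
  crm configure delete cli-prefer-aapche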

[ClusterLabs] Re: NFS exports

2015-08-26 Thread Ulrich Windl
Streeter, Michelle N michelle.n.stree...@boeing.com wrote on 26.08.2015 at 15:42 in message 9a18847a77a9a14da7e0fd240efcafc2504...@xch-phx-501.sw.nos.boeing.com: I have been using the Linux /etc/exports file to hold my exports for my cluster, and it works fine this way as long as every node has
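If the exports are meant to follow the cluster rather than live in each node's /etc/exports, the ocf:heartbeat:exportfs resource agent is the usual alternative. A minimal sketch, with the directory, client range, and fsid chosen purely as placeholders:

  # Hypothetical: manage an NFS export as a cluster resource instead of a
  # static /etc/exports entry (path and client range are assumptions).
  crm configure primitive p_export_data ocf:heartbeat:exportfs \
      params directory="/srv/nfs/data" clientspec="192.168.0.0/24" \
             options="rw,sync,no_root_squash" fsid="1" \
      op monitor interval="30s"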