>>>On 8/6/2009 at 8:21 PM, Bernie Wu wrote:
> Thanks Yan Gao for the reply. We're using heartbeat 2.1.3-0.9 running
> under zVM 5.4 / SLES10-SP2. So I guess I have to use cibadmin. So here
> goes:
>
> 1. # cibadmin -Q | grep node
> value="2.1.3-node: a3184d5240c6e7032aef9
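If the goal is just to list which nodes a CIB dump contains, the XML can be post-processed with ordinary text tools; a minimal sketch, assuming the CIB was saved with `cibadmin -Q > /tmp/cib.xml` (the file path and node names below are invented for illustration):

```shell
# Create a made-up sample of the <nodes> section of a CIB dump.
cat > /tmp/cib.xml <<'EOF'
<nodes>
  <node id="uuid-1" uname="www2test" type="normal"/>
  <node id="uuid-2" uname="www3test" type="normal"/>
</nodes>
EOF
# Pull out just the node names (the uname attributes).
grep -o 'uname="[^"]*"' /tmp/cib.xml | sed 's/uname="//;s/"//'
# -> www2test
# -> www3test
```

Against a live cluster the same pipeline works on `cibadmin -Q` output directly.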
Hi,
my Cluster is running fine but my logfile is being flooded by these messages:
lha-snmpagent[5252]: 2009/08/07_09:38:37 info: unpack_rsc_op:
tomcat38-www2test_monitor_0 on www2test returned 0 (ok) instead of the
expected value: 7 (not running)
lha-snmpagent[5252]: 2009/08/07_09:38:37 notice:
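The log above means the probe (`monitor_0`) found the resource already running (rc 0) where the CRM expected it stopped (rc 7). As a convenience sketch, a small helper translating the OCF monitor exit codes into the names crmd logs (the numeric values follow the OCF resource agent conventions; the helper itself is hypothetical):

```shell
# Map an OCF monitor exit code to the label seen in unpack_rsc_op messages.
ocf_rc_name() {
  case "$1" in
    0) echo "ok" ;;
    7) echo "not running" ;;
    *) echo "error ($1)" ;;
  esac
}
ocf_rc_name 0    # -> ok
ocf_rc_name 7    # -> not running
```

Running the resource agent's monitor action by hand and checking `$?` against this table is a quick way to see what the probe saw.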
Alain.Moulle wrote:
> Hello Andrew,
> Could you explain why this functionality is no longer available
> (the configuration lines remain in ha.cf)?
ipfail was replaced by pingd in v2. That was in the very first version
of v2 afaik.
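For anyone migrating, pingd in heartbeat v2 is usually started from ha.cf together with one or more ping directives; a minimal sketch (the ping target and the -m/-d values are assumptions, adjust to your network):

```
ping 192.168.1.254
respawn root /usr/lib/heartbeat/pingd -m 100 -d 5s
```

pingd then publishes a node attribute (named "pingd" by default) whose value constraints can test, which is what replaces ipfail's behaviour.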
> And how should we proceed to avoid split-brain cases in a two-node
> cluster?
>>>On 8/6/2009 at 8:46 PM, Bernie Wu wrote:
> Hi Listers,
> We are running heartbeat 2.1.3-0.9 under zVM 5.4 / SLES10-SP2.
> We have 2 test clusters, one with 3 nodes and the other with 2 nodes.
> How do I prevent the nodes from one cluster showing up in the other
> cluster and vice versa?
On Friday, 7 August 2009 at 10:28:52, Yan Gao wrote:
> >>>On 8/6/2009 at 8:46 PM, Bernie Wu wrote:
> >
> > Hi Listers,
> > We are running heartbeat 2.1.3-0.9 under zVM 5.4 / SLES10-SP2.
> > We have 2 test clusters, one with 3 nodes and the other with 2 nodes.
> >
> > How do I prevent the nodes from one cluster showing up in the other
> > cluster and vice versa?
Hi,
ok, but do you agree that in case of a heartbeat network problem there
will be a "race to stonith" from all nodes in the cluster, and so the
risk that both nodes will be killed is not zero?
That's why I thought that a ping towards a piece of equipment outside
the cluster should reduce the risk of split brain.
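A common way to use that external ping target is a location rule keyed on the pingd attribute, so a node that has lost sight of the outside equipment refuses to run the resources; a sketch of such a CIB constraint (the resource name and the rule/expression ids are made up):

```
<rsc_location id="my-rsc-connected" rsc="my-resource">
  <rule id="my-rsc-connected-rule" score="-INFINITY" boolean_op="or">
    <expression id="e1" attribute="pingd" operation="not_defined"/>
    <expression id="e2" attribute="pingd" operation="lte" value="0"/>
  </rule>
</rsc_location>
```

This does not remove the stonith race itself, but it keeps resources off a node with no outside connectivity, which limits the damage a split can do.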
Hi,
looks like the apache on my systems does not like the command:
sh -c wget -O- -q -L --bind-address=127.0.0.1 http://*:80/server-status | tr
'\012' ' ' | grep -Ei "[[:space:]]*" >/dev/null
especially the "http://*:80/server-status" part won't create any
request entries in the access log of the apache.
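The `*` comes straight from apache's `Listen *:80` directive; wget cannot resolve a literal `*` as a hostname, so no request ever reaches the access log. Substituting a concrete address by hand is one way to test the URL (127.0.0.1 is an assumption; use whatever your Listen line actually binds):

```shell
# Replace the '*' taken from "Listen *:80" with a reachable address.
STATUSURL=$(echo "http://*:80/server-status" | sed 's|\*|127.0.0.1|')
echo "$STATUSURL"   # -> http://127.0.0.1:80/server-status
```

If the substituted URL still produces nothing in the access log, running the wget line on its own (without the tr/grep pipeline) makes the actual HTTP error visible.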
Hi!
Is there any tool that can be used to retrieve machine-readable
cluster status? crm_mon -s -1 doesn't show resource state. I've also
tried parsing 'cibadmin --query' output, but it only gives me
information about node states, while I'm interested in resource states
too.
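Resource state does live in the status section of the CIB, under the lrm_resource entries, so the `cibadmin --query` output can be mined for it; a rough sketch over a saved dump (the file path and the sample XML below are invented for illustration):

```shell
# Made-up fragment of the per-node status section of a CIB dump.
cat > /tmp/status.xml <<'EOF'
<lrm_resources>
  <lrm_resource id="tomcat38-www2test" type="tomcat" class="ocf"/>
  <lrm_resource id="IPaddr-www2test" type="IPaddr" class="ocf"/>
</lrm_resources>
EOF
# List the resource ids the node's LRM knows about.
grep -o 'lrm_resource id="[^"]*"' /tmp/status.xml | sed 's/.*id="//;s/"//'
# -> tomcat38-www2test
# -> IPaddr-www2test
```

For a single resource, `crm_resource -W -r <id>` can also report where it is currently running.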
--
On Friday, 7 August 2009 at 15:30:31, Denis Chapligin wrote:
> Hi!
>
> Is there any tool, that can be used to retrieve machine readable
> cluster status? crm_mon -s -1 doesn't show resource state. I've also
> tried parsing 'cibadmin --query' output, but it only gives me
> information about node states, while I'm interested in resource states
> too.