I don’t have access to the cluster anymore. I built 4 instances of this and had
a few issues along the way to investigate. I don’t recall getting any messages
like yours. Sorry I can't help. You should send your cluster.conf.
Bevan Broun
Solutions Architect
Ardec International
http://www.ardec.c
2009/1/8 Brett Delle Grazie :
> Hi,
>
> I have a configured two-node cluster with some GFS file systems on them.
>
> Those servers also run http servers and I'd like to load-share the HTTP
> servers without
> putting a hardware load balancer in front of them.
>
> I read about clusterIP: http://www.
> -Original Message-
> From: linux-cluster-boun...@redhat.com
> [mailto:linux-cluster-boun...@redhat.com] On Behalf Of
> Chrissie Caulfield
> Sent: Monday, January 12, 2009 9:00 AM
> To: linux clustering
> Subject: Re: [Linux-cluster] Strange CMAN error
>
> > It's as though cman is conca
Jeff Sturm wrote:
> What might cause a message like:
>
> Jan 12 08:41:24 t0core-mqc02 openais[1716]: [CMAN ] Node 8 conflict,
> remote cluster name='t0core-inner-rhcxvm', local='t0core-inner-rhc'
>
> I've double- and triple-checked that /etc/cluster/cluster.conf is
> identical on every node. It
What might cause a message like:
Jan 12 08:41:24 t0core-mqc02 openais[1716]: [CMAN ] Node 8 conflict,
remote cluster name='t0core-inner-rhcxvm', local='t0core-inner-rhc'
I've double- and triple-checked that /etc/cluster/cluster.conf is
identical on every node. It starts with:
It'
Hi,
By trial and error, I was able to write the queries. We need to follow the
node structure of the cluster.conf file to fetch the result. To select a
specific attribute, we prefix the attribute name with '@'.
However, I would still like to know whether we can relate the results in a
way simil
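To make the '@' attribute-selection point concrete, here is a minimal sketch
using Python's standard-library ElementTree. The cluster.conf content below is
illustrative only (the usual cluster/clusternodes/clusternode layout); adjust
the paths and attribute names to match your actual file.

```python
import xml.etree.ElementTree as ET

# Illustrative cluster.conf fragment (not from any real cluster).
CLUSTER_CONF = """\
<cluster name="example" config_version="1">
  <clusternodes>
    <clusternode name="node1" nodeid="1"/>
    <clusternode name="node2" nodeid="2"/>
  </clusternodes>
</cluster>
"""

root = ET.fromstring(CLUSTER_CONF)

# In XPath, attributes are addressed with a leading '@'. ElementTree
# supports '@' in predicates, e.g. selecting an element by attribute value:
node = root.find(".//clusternode[@name='node2']")
print(node.get("nodeid"))

# Pulling one attribute from every matching element:
names = [n.get("name") for n in root.findall(".//clusternode")]
print(names)
```

Running this prints "2" and then ['node1', 'node2']; the same predicates work
against a file parsed with ET.parse('/etc/cluster/cluster.conf').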
Yes, I suspect the problem is that the node is 'bouncing' as it joins
the cluster.
Causes of this are usually to do with either a) startup scripts (e.g. some
Xen ones) taking the interface down and then up after openais has started,
or b) "intelligent" switches taking too long to recognise the multica
Hi,
Thanks for your response, Marc. It seems we are the only ones facing this
problem ...?
I saw a fix in the changelog:
- A dirty node is now prevented from joining the cman cluster.
It could be related to our problem ... because when launching cman
on the second node, the node is labeled as
Hi All,
I am new to RHEL clusters. Is there any way, other than reading the
cluster.conf file, to view or list all the cluster resources used under a
cluster service (resource group)? Some command that might give output
like:
Service Name = Service1
Resources -
IP Add
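I am not aware of a single stock command that prints the full resource tree
(clustat shows service state rather than resources), but one way to get a
listing in roughly that shape is to walk the rm/service section of
cluster.conf. A hedged sketch with Python's standard-library ElementTree; the
resource tags below (ip, fs, script) are only examples of common resource
types, not a statement of what your configuration contains:

```python
import xml.etree.ElementTree as ET

# Illustrative cluster.conf fragment; real resource elements and their
# attributes depend on your resource agents.
CLUSTER_CONF = """\
<cluster name="example" config_version="1">
  <rm>
    <service name="Service1">
      <ip address="192.168.0.10"/>
      <fs name="gfs-data" mountpoint="/data"/>
      <script name="httpd" file="/etc/init.d/httpd"/>
    </service>
  </rm>
</cluster>
"""

root = ET.fromstring(CLUSTER_CONF)
for svc in root.findall(".//rm/service"):
    print("Service Name = %s" % svc.get("name"))
    print("Resources -")
    for res in svc:
        # Each child element of <service> is one resource; print its tag
        # plus whichever identifying attribute it happens to carry.
        ident = res.get("name") or res.get("address") or ""
        print("  %s %s" % (res.tag, ident))
```

Against the sample above this prints the service name followed by its three
resources (ip 192.168.0.10, fs gfs-data, script httpd); point it at
/etc/cluster/cluster.conf with ET.parse() for a real listing.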