Hello
On Wed, Aug 1, 2012 at 11:04 PM, kailash kumawat <
kailash.kuma...@rudrainfotainment.com> wrote:
> Hi
>
> I have successfully configured the Red Hat cluster for Apache. Now I
> want to change the password of my master and the other node, so how can I
> update it in the luci server? Please help me
>
>
>
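For the record (this is an assumption about the setup, not something stated in the thread): on RHEL/CentOS clusters managed through luci, the per-node password luci stores is normally the password of the `ricci` user on each node. A minimal sketch of the usual procedure:

```shell
# On each cluster node, change the password of the ricci user
# (luci authenticates to the nodes through the ricci agent):
passwd ricci

# Restart ricci so the agent picks up the change:
service ricci restart

# Then, in the luci web UI, re-authenticate to each node by
# re-entering the new ricci password for that node.
```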
On 02.08.2012 17:02, Digimer wrote:
On 08/02/2012 08:42 AM, Heiko Nardmann wrote:
On 02.08.2012 14:19, AKIN ÖZTOPUZ wrote:
Hi
I have a fencing problem in a 2-node cluster )
the fence device agent is like that :
when I run the fence_node nodename command on the host, the related n
On 08/02/2012 08:42 AM, Heiko Nardmann wrote:
On 02.08.2012 14:19, AKIN ÖZTOPUZ wrote:
Hi
I have a fencing problem in a 2-node cluster )
the fence device agent is like that :
when I run the fence_node nodename command on the host, the related node
goes down but I am getting erro
On 08/02/2012 08:19 AM, AKIN ÖZTOPUZ wrote:
Hi
I have a fencing problem in a 2-node cluster )
the fence device agent is like that :
when I run the fence_node nodename command on the host, the related node
goes down but I am getting errors in /var/log/messages :
Aug 2 14:55:31 sa
On 02.08.2012 15:35, AKIN ÖZTOPUZ wrote:
This agent is implemented using the iLO port on HP servers.
How can I use debug mode?
Thanks
Uhh ... some misunderstanding. What I meant was: which programming
language has been used to implement this agent ... Python, Shell, ...
I am not
On Thu, 2 Aug 2012 16:12:24 +0200 emmanuel segura wrote:
> can you show me your lvm.conf?
Here it is.
Gianluca
lvm.conf
Description: Binary data
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
Yup, I missed the part where you said you only have a single node.
To be clear, the portion of the docs you cite below is exactly why you need
to be careful about how many votes you give to qdiskd. It should be a
tie-breaker. You are using it to bring up a 3-node cluster in which only a
single
can you show me your lvm.conf?
2012/8/2 Gianluca Cecchi
> On Thu, 2 Aug 2012 07:07:25 -0600 Corey Kovacs wrote:
> > I might be reading this wrong but just in case, I thought I'd point this
> out.
> >
> [snip]
> > A single node can maintain quorum since 2+3>(9/2).
> > In a split brain condition wh
On Thu, 2 Aug 2012 07:07:25 -0600 Corey Kovacs wrote:
> I might be reading this wrong but just in case, I thought I'd point this out.
>
[snip]
> A single node can maintain quorum since 2+3>(9/2).
> In a split brain condition where a single node cannot talk to the other
> nodes, this could be disast
This agent is implemented using the iLO port on HP servers.
How can I use debug mode?
Thanks
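As a pointer (the exact agent in use isn't shown in the thread, so this assumes the stock fence_ilo script shipped with the fence-agents package): the standard fence agents accept a `-v` / `--verbose` flag, and you can exercise them directly from the shell before involving fenced. A sketch with placeholder address and credentials:

```shell
# Query the iLO interface directly with verbose output enabled.
# Address, login, and password here are placeholders:
fence_ilo -a 192.168.1.10 -l clsfenceadmin -p secret -o status -v

# If the status query works, try the actual fence action the same way
# (e.g. -o reboot) to see exactly where it fails.
```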
From: Heiko Nardmann
To: linux-cluster@redhat.com
Sent: Thursday, August 2, 2012 3:42 PM
Subject: Re: [Linux-cluster] fencing issue in 2 nodes cluster
Am 02.08.20
I might be reading this wrong but just in case, I thought I'd point this
out.
Your quorum config:
nodes = 3 × 2 votes (6 votes total)
qdisk = 3 votes
A single node can maintain quorum since 2+3>(9/2).
In a split brain condition where a single node cannot talk to the other
nodes, this could
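The arithmetic above can be checked directly (assuming, per the figures in the thread, 3 nodes at 2 votes each plus a 3-vote qdisk): expected votes total 9, the quorum threshold is floor(9/2) + 1 = 5, so a lone node that can still see the qdisk has 2 + 3 = 5 votes and stays quorate.

```python
# Sketch of the quorum arithmetic discussed above
# (3 nodes x 2 votes each, plus a 3-vote qdisk).
def quorum_threshold(expected_votes):
    # cman considers the cluster quorate when votes >= expected//2 + 1,
    # i.e. strictly more than half the expected votes.
    return expected_votes // 2 + 1

nodes, votes_per_node, qdisk_votes = 3, 2, 3
expected = nodes * votes_per_node + qdisk_votes   # 9
threshold = quorum_threshold(expected)            # 5

# A single isolated node that can still reach the qdisk:
single_node = votes_per_node + qdisk_votes        # 2 + 3 = 5
print(expected, threshold, single_node >= threshold)  # 9 5 True
```

This is exactly the split-brain hazard Corey describes: one node plus the qdisk outvotes the other two nodes combined.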
On 02.08.2012 14:19, AKIN ÖZTOPUZ wrote:
Hi
I have a fencing problem in a 2-node cluster two_node="1"/> )
the fence device agent is like that :
login="clsfenceadmin" method="cycle" name="fence_node2"
passwd="**" power_wait="4"/>
when I run the fence_node nodename comma
Hi
I have a fencing problem in a 2-node cluster )
the fence device agent is like that :
when I run the fence_node nodename command on the host, the related node
goes down but I am getting errors in /var/log/messages :
Aug 2 14:55:31 sapclsn2 fenced[6714]: fencing node "sapcls
if you think the problem is in LVM, put it in debug mode; see man lvm.conf
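For reference, what Emmanuel is suggesting (turning up LVM's own logging) lives in the `log` section of lvm.conf; a sketch, with the log file path chosen as an example:

```
log {
    verbose = 0
    syslog = 1
    file = "/var/log/lvm2.log"   # path is an example
    level = 7                    # 7 = debug; 0 disables file logging
}
```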
2012/8/2 Gianluca Cecchi
> On Wed, Aug 1, 2012 at 6:15 PM, Gianluca Cecchi wrote:
> > On Wed, 1 Aug 2012 16:26:38 +0200 emmanuel segura wrote:
> >> Why don't you remove expected_votes=3 and let the cluster automatically
> calcula
On Thu, 02 Aug 2012 09:39:34 +0200 Heiko Nardmann wrote:
> If that is a real production system and not just for playing, you should set up
> a test environment first and also create a plan for which use cases should run
> with the new cluster.
+1 for sure for what Heiko recommended.
Plus, in the first place:
On Wed, Aug 1, 2012 at 6:15 PM, Gianluca Cecchi wrote:
> On Wed, 1 Aug 2012 16:26:38 +0200 emmanuel segura wrote:
>> Why don't you remove expected_votes=3 and let the cluster automatically
>> calculate that
>
> Thanks for your answer Emmanuel, but cman starts correctly, while the
> problem seems relat
Am 02.08.2012 08:52, schrieb Ralf Aumueller:
Hello,
we have a 2-node cluster which is still running on CentOS 6.2. I would like to
make an update to 6.3. Planned procedure:
- migrate services from node 1 to node 2
- stop cluster on node 1
- update + reboot node 1
- migrate services from node 2 t
Hi,
On Wed, 1 Aug 2012 22:54:49 +0800 (SGT), Zama Ques wrote:
>
> =
> Cluster Name: ClusterA
>
> Node1: system1.example.com Priority:1 in Failover Domain
> Node2: system2.example.com Priority:2 in Failover Domain
>
> File System Resource : /data1 - An ext3 file system
>
> =
>
>
Hello,
we have a 2-node cluster which is still running on CentOS 6.2. I would like to
make an update to 6.3. Planned procedure:
- migrate services from node 1 to node 2
- stop cluster on node 1
- update + reboot node 1
- migrate services from node 2 to node 1
- stop cluster on node 2
- update + re
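Ralf's outline matches the usual rolling-update procedure on rgmanager-based clusters. A sketch of the per-node steps, with placeholder service and node names (not taken from the thread):

```shell
# On node 1: relocate each managed service to the peer
# (service and member names are placeholders):
clusvcadm -r my_service -m node2

# Leave the cluster cleanly, then update and reboot:
service rgmanager stop
service cman stop
yum update
reboot

# After the reboot the node rejoins the cluster;
# then repeat the same steps on node 2.
```

Stopping rgmanager before cman matters: it lets services shut down or relocate gracefully instead of the node being fenced on its way out.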