Hi Jakov,
I got your points, thanks a lot.
Regards
Prathyush
Share your configuration, please.
On 03/31/2013 02:51 AM, Prathyush wrote:
Any Idea ?
I have the below scenario:
I made the first node go down and the service failed over to the second node; then I switched off the second node and the service again switched to the third node. Quorum was not lost; it adjusted automatically.
On Mar 22, 2013 9:11 PM, "Digimer" wrote:
This is correct.
Quorum needs "50% + 1" of the votes. So in Prathyush's cluster of 3
nodes / 3 votes:
3 / 2 == 1.5, rounded up is 2.
So once the second node shut down, quorum was lost. With quorum lost,
the remaining node will stop all of its clustered services because it
can no longer be sure ...
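Digimer's arithmetic can be sketched in a few lines of Python (a standalone illustration; the `quorum` helper is hypothetical, not part of any cluster tooling):

```python
# Quorum is a strict majority of the expected votes: floor(total / 2) + 1,
# i.e. "50% + 1" with integer rounding.
def quorum(total_votes: int) -> int:
    return total_votes // 2 + 1

total = 3                      # 3 nodes, 1 vote each
print(quorum(total))           # 2: votes needed to stay quorate
print(2 >= quorum(total))      # True: two nodes up, cluster is quorate
print(1 >= quorum(total))      # False: one node left, quorum is lost
```

Note that for an even vote count the same formula still gives a strict majority (e.g. 4 votes need 3), which plain "round up half" would not.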
Hello
If I understand correctly, you have 3 nodes (3 total votes) and you power off 2: 3 - 2 = 1 (quorum lost), fewer than 50% of the votes remain.
Can you show your config and your log? Without them it's very hard to help you.
Thanks
2013/3/22 Prathyush
Hi,
I have a 3-node cluster with 1 vote each, and decided to shut down 2 of the nodes for maintenance.
When a proper power off / init 0 is given to two of the nodes, I never lose quorum.
When I force a power off (switching off the power / pulling the power cable), quorum is lost and the services ...
... a special heuristic to leave the quorum device, but it's not working.

Any idea?

--
.`'`. GouNiNi
: ': :
`. ` .` GNU/Linux
`'` http://www.geekarea.fr
- Original Mail -
> From: "emmanuel segura"
> To: "linux clustering"
> Sent: Tuesday 7 August 2012 11:29:59
> Subject: Re: [Linux-cluster] Quorum device brain the cluster when master loses network
>
> Do you reboot all nodes in your cluster after removing the expected_votes?
Yes I do ;)
- Original Mail -
> From: "emmanuel segura"
> To: "linux clustering"
> Sent: Wednesday 1 August 2012 10:58:59
> Subject: Re: [Linux-cluster] Quorum device brain the cluster when master loses network
>
> Hello Gounini
>
> Sorry, but I told you: remove ...
> De: "emmanuel segura"
> À: "linux clustering"
> Envoyé: Lundi 30 Juillet 2012 17:35:39
> Objet: Re: [Linux-cluster] Quorum device brain the cluster when master
> lose network
>
>
> can you send me the ouput from cman_tool status
Can you send me the output from cman_tool status when the cluster is running?
2012/7/30 GouNiNi
- Original Mail -
> From: "Digimer"
> To: "linux clustering"
> Cc: "GouNiNi"
> Sent: Monday 30 July 2012 17:10:10
> Subject: Re: [Linux-cluster] Quorum device brain the cluster when master loses network
>
> On 07/30/2012 10:43 AM, GouNiNi wrote:
Hello GouNiNi
Don't use the expected_votes directive; let the cluster calculate that. If you want a cluster that remains quorate with two nodes + quorum disk, the quorum disk must have 2 votes.
All votes = 6: 6 - 2 = 4, and the result is more than half.
Sorry for my English, I hope the idea is clear.
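The arithmetic above can be sketched as Python, under the assumption that cman simply adds the quorum disk's votes to the node votes (an illustration, not actual cman code):

```python
def quorum(expected_votes: int) -> int:
    # strict majority: more than half of the expected votes
    return expected_votes // 2 + 1

nodes, node_vote, qdisk_votes = 4, 1, 2
expected = nodes * node_vote + qdisk_votes    # 6 total votes
# Two nodes go down: 2 node votes remain, plus the quorum disk's 2.
remaining = 2 * node_vote + qdisk_votes       # 4
print(quorum(expected))                # 4: votes needed
print(remaining >= quorum(expected))   # True: still quorate with 2 nodes + qdisk
```

With only 1 qdisk vote (5 total, quorum 3), two surviving nodes plus the qdisk would also just reach quorum; the 2-vote qdisk keeps the margin after the totals round.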
On 07/30/2012 10:43 AM, GouNiNi wrote:
Hello,
I did some tests on a 4-node cluster with a quorum device and I found a bad situation with one test, so I need your knowledge to correct my configuration.
Configuration:
4 nodes, each voting 1
quorum device voting 1 (to hold services with a minimum of 2 nodes up)
cman expected votes 5
Situatio...
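A configuration along those lines would look roughly like this in cluster.conf (a hedged sketch: the node names, the qdisk label and the omitted fencing/rm sections are made up; only the vote layout mirrors the description above):

```xml
<?xml version="1.0"?>
<cluster name="testcluster" config_version="1">
  <!-- 4 node votes + 1 quorum device vote = 5 expected votes -->
  <cman expected_votes="5"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1" votes="1"/>
    <clusternode name="node2" nodeid="2" votes="1"/>
    <clusternode name="node3" nodeid="3" votes="1"/>
    <clusternode name="node4" nodeid="4" votes="1"/>
  </clusternodes>
  <!-- quorum device contributing 1 vote -->
  <quorumd interval="2" tko="10" votes="1" label="qdisk1"/>
</cluster>
```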
Hi,
Replying to your original email...
The problem I can see in the logs is the line:
openais[971]: [SYNC ] This node is within the primary component and will provide service.
As you have expected_votes=2 and node votes=1 this shouldn't happen, so it looks like a bug.
P.S.
If you had fencing configured ...
First of all, thanks everybody for helping me...
RHEL 5.5
rgmanager-2.0.52-6.0.1.el5
cman-2.0.115-34.el5
Best regards
Claudio Martin
Abilene Net Solutions S.r.l.
On 31/05/2011 21:13, Digimer wrote:
On 05/31/2011 03:10 PM, Mark Hlawatschek wrote:
Martin,
I did some testing with RHEL5.6 and no additional asynchronous updates.
I remember that it worked as you expected. If rgmanager notices that quorum dissolved, it triggers an emergency shutdown for all services running on the nodes that lost quorum.
Which version of rgmanager are you ...
On 05/31/2011 02:33 PM, Martin Claudio wrote:
I am also planning to implement some way of fencing the nodes, but at the moment it's only a simulation lab.
Please read this:
http://wiki.alteeve.com/index.php/Red_Hat_Cluster_Service_2_Tutorial#Concept.3B_Fencing
Anyway I still have the problem: nod...
On 05/31/2011 01:56 PM, Alan Brown wrote:
Digimer wrote:
With a two-node, quorum is effectively useless, as a single node is
allowed to continue.
That's what qdiskd is for. It's also useful in larger clusters.
Agreed, but there are 2 caveats that need addressing;
1. qdisk requires a SAN (D
Digimer wrote:
With a two-node, quorum is effectively useless, as a single node is
allowed to continue.
That's what qdiskd is for. It's also useful in larger clusters.
Also, without proper fencing, things will not fail
properly. This means that you are in somewhat of an undefined area.
Un
On 05/31/2011 12:22 PM, Martin Claudio wrote:
Hi,
i have a problem with a 2 node cluster with this conf:
There are a couple of problems here; You need:
With a two-node, quorum is effectively useless, as a single node is
allowed to continue. Also, without proper fencing, things w
Hi,
I have a problem with a 2-node cluster with this conf:
all is OK, but when node 2 goes down quorum is dissolved but the resources are not stopped, here ...
I have been using a 2-node cluster with a quorum disk successfully for about 2 years. Beginning today, the cluster will not boot correctly.
The RHCS services start, but fencing fails with:
dlm: no local IP address has been set
dlm: cannot start dlm lowcomms -107
This seems ...
esh susvirkar
Sent: Thursday, July 29, 2010 11:35 PM
To: linux clustering
Subject: Re: [Linux-cluster] Quorum not quorate on RHEL 4 U 8
You have attached the cluster.conf file for the virtual guest where you have formed a "cluster without quorum disk".
Kindly attach the cluster.conf file where you ha...
Hi,
So, what is your suggestion?
Thanks,
Yes correct. I use Conga for my clusterware.
Thanks,
From: linux-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com] On Behalf Of POWERBALL ONLINE
Sent: Thursday, July 22, 2010 10:54 PM
To: linux clustering
Subject: Re: [Linux-cluster] Quorum not quorate on RHEL 4 U 8
Hi
What tool did you use to create the cluster? Is it Conga?
On Thu, Jul 22, 2010 at 12:29 PM, Wahyu Darmawan wrote:
Hi all,
Any update?
Thank you.
From: linux-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com] On Behalf Of Wahyu Darmawan
Sent: Thursday, July 22, 2010 12:29 PM
To: linux-cluster@redhat.com
Subject: [Linux-cluster] Quorum not quorate on RHEL 4 U 8
Hi all,
I have a problem with my cluster. I have 2 nodes in my cluster.
On July 12 the nodes lost the connection to the storage and lost 1 disk partition on both nodes. Then I decided to assign a new partition from the storage and use it on my nodes.
But until today, the quorum is not going to quora...
Greetings,
On Sat, Mar 6, 2010 at 4:45 AM, mogruith wrote:
> Hi all
>
> I have a special vlan dedicated for a heartbeat link between my two
> nodes.
This vlan should have multicast enabled. Some switches require deliberate configuration, as it may not be enabled by default.
My 2p worth...
Regards
Hi Brem, and thanks for your answer.
When I moved my service manually, it worked, so I thought my quorum was being used. But in fact it seems not, even though the "has quorate" box was checked in system-config-cluster.
I have a special vlan dedicated for a heartbeat link between my two nodes.
In f...
On Fri, 2010-03-05 at 21:18 +0100, mogruith wrote:
Hi all
Today my cluster crashed, so I have several questions to ask.
- First of all, is there a kind of "heartbeat" on a quorum disk? If yes, it means I have two heartbeats in my cluster, one via the quorum disk, the second via a network link. Is that right?
- How do I set the heartbeat link in my c...
Thanks Gordan!
You put me in the right direction.
Yes, we are driving an active/passive solution.
Each node can see the storage LUNs of both nodes on startup.
We sort them in an init script within the initrd (remove and re-add) to get a well-defined order. Then we start multipathing via vpath. A...
Thomas Meller wrote:
You're right, I am unclear.
Some years ago, we tried two versions: storage-based
mirroring and host-based mirroring. As the processes were
too complicated in our company we decided to mirror the
disks host-based. So currently there is a /dev/md0
(simplified) consisting of sd
Thomas Meller wrote:
Many thanks, Gordan.
This could nearly be the solution.
But as I understand it, it's not possible to mirror the root-(g)fs to another computing center without relying on a new SPOF (if at all possible) or on hardware-dependent solutions.
Not sure I follow what you mean. I...
the initrd?
Thanks again,
Thomas
Thomas,
It sounds like what you're looking for is Open Shared Root:
http://www.open-sharedroot.org/
Gordan
Thomas Meller wrote:
Just found this thread while searching for illumination.
We have been running a self-constructed cluster setup for 4 years on a 2.4 kernel (RHAS3).
It's a two-machine setup, nothing fancy.
The fancy thing is that we boot the nodes in different computing centers from mirrored SAN devices which are ru...
Hi Brem
In my multipath setup I use the following:
polling_interval 3 (checks the storage every 3 seconds)
no_path_retry 5 (will check the path 5 times if a failure happens on it, making it last scsi_timer (/sys/block/sdXX/device/timeout) + 5*3 seconds)
path_grouping_policy multibus (to l...
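The settings above correspond to a multipath.conf fragment roughly like this (a sketch containing only the values quoted in the mail; the blacklist and device sections of a real configuration are omitted):

```conf
defaults {
        polling_interval     3         # check each path every 3 seconds
        no_path_retry        5         # retry a failed path 5 times before failing it over
        path_grouping_policy multibus  # group all paths together and load-balance across them
}
```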
Rafael,
What you have to take care about is the following.
Imagine your SAN admin modifies the wrong zoning while doing his job, making the qdisk (both legs) unavailable to your nodes, and at this time you have one node off because of a maintenance operation: your whole cluster would go down.
Brem
Hi Brem
Hi Kaloyan
On Wed, 16-12-2009 at 13:41 +0200, Kaloyan Kovachev wrote:
> About the 6 node cluster - do you really need to have it operational with just
> a single node? If this is not mandatory it might be better to use different
> votes for the nodes to break the tie instead of mirrored qdi
On Wed, 16 Dec 2009 01:02:19 +0100, Jakov Sosic wrote:
> On Tue, 2009-12-15 at 19:51 +0100, Rafael Micó Miranda wrote:
> [1] http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Logical_Volume_Manager_Administration/mirrored_volumes.html
On Tue, 2009-12-15 at 19:51 +0100, Rafael Micó Miranda wrote:
> http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Logical_Volume_Manager_Administration/mirrored_volumes.html
> http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Logical_Volume_Manager_Administration/
Hi Rafael,
I can already predict what is going to happen during your test.
If one of your nodes loses only 1 leg of your mirrored qdisk (either with mdadm or lvm), the qdisk will still be active from the point of view of that particular node, so nothing will happen.
What you should consider is ...
On 15.12.2009 20:01, Rafael Micó Miranda wrote:
in a similar situation I am using a raid-1 device (built with mdadm
prior to the startup of cman/rgmanager) which consists of two luns, one
in each location. This works pretty well as quorum-device.
Hi Jakov
On Tue, 15-12-2009 at 17:26 +0100, Jakov Sosic wrote:
> On Tue, 2009-12-15 at 15:31 +0100, Andreas Pfaffeneder wrote:
>
> > in a similar situation I am using a raid-1 device (built with mdadm
> > prior to the startup of cman/rgmanager) which consists of two luns, one
> > in each ...
Hi Brem
Hi Andreas
Hi Jakov,
On Tue, 2009-12-15 at 15:31 +0100, Andreas Pfaffeneder wrote:
> in a similar situation I am using a raid-1 device (built with mdadm
> prior to the startup of cman/rgmanager) which consists of two luns, one
> in each location. This works pretty well as quorum-device.
So you have to create the mdraid ...
Hi,
The problem you could encounter is network and storage split brain.
If your qdisk LUNs were hosted by 2 arrays located in 2 different rooms or sites, each room hosting half the nodes of your cluster, then in case a SAN and network partition occurs between the 2 rooms, you'll find yourself in a ...
Hi Rafael,
On 14.12.2009 23:15, Rafael Micó Miranda wrote:
Hi all,
I was wondering if there is a way to achieve a "quorum disk over a RAID
software device" working CMAN cluster.
in a similar situation I am using a raid-1 device (built with mdadm
prior to the startup of cman/rgmanager) w
On Mon, 2009-12-14 at 23:15 +0100, Rafael Micó Miranda wrote:
> - Using an LVM-Mirror device as a Qdisk and creating additional LUNs for
> mirror and log in both storage arrays: if the Qdisk is a Clustered
> Logical Volume,
But is it possible to have a clustered LVM mirror? And if so, how? I would ...
Hi all,
I was wondering if there is a way to achieve a working CMAN cluster with a "quorum disk over a software RAID device".
Explanation:
A) Environment
- 6 x different servers used as cluster nodes, with dual FC HBAs
- 2 x different fabrics, each built with 3 FC SAN switches
- 2 x storage arrays, with ...
On Wed, Nov 18, 2009 at 06:32:25AM +0100, Fabio M. Di Nitto wrote:
> > Apologies if a similar question has been asked in the past, any inputs,
> > thoughts, or pointers welcome.
>
> Ideally you would find a way to plug the storage into the 2 nodes that
> do not have it now, and then run qdisk on
Karl Podesta wrote:
> Hi there,
>
> Is it possible to have a quorum disk, applicable only to 2 nodes
> out of a 4 node cluster?
No. The prerequisite for qdisk to work is for all nodes in a cluster to
have it running at the same time.
> This architecture should really be two clusters, right? One
Hi there,
Is it possible to have a quorum disk, applicable only to 2 nodes
out of a 4 node cluster? (i.e. with the other 2 nodes not connected
to the shared quorum disk storage, or not affected by failover or
service operation on the 2 nodes that are sharing a disk?)
I have encountered the follo
On Tue, 2009-09-15 at 07:57 -0400, James Marcinek wrote:
Hello all,
I have several clusters which have been built using system-config-cluster.
I would like to now add a quorum disk and possibly a multi-cast address to the
cluster as well. Can someone tell me how to go about this using
system-config-cluster? I've tried looking over the tool but cannot
On Tue, 2009-08-11 at 14:30 -0500, Paras pradhan wrote:
I have a 3-node xen cluster under CentOS 5.3 that hosts Linux virtual machines. Node1 and Node2 have virtual machines but Node3 does not. Node3 is basically used when Node1 or Node2 goes down. This is working OK.
Now when I turn off both Node1 and Node2, quorum is dissolved and my cluster fails. I...
> Correct me if I'm wrong, but Red Hat does not officially support clusters
> with quorum disks, with more than 16 nodes.
>
> Regards,
> Juanra
Hi Juanra, no idea about this limit; my numbers were only to ask what happens if you need more.
Greetings,
ESG
On Mon, Jun 29, 2009 at 11:48 AM, ESGLinux wrote:
hi,
Thanks for your quick answer.
Just out of curiosity, why this size? And with 10 MB, what happens if you need more? (The question is: why would you need more? Perhaps 1000 nodes? Or doesn't it matter?)
Greetings,
ESG
2009/6/29 H.Päiväniemi
http://sources.redhat.com/cluster/wiki/FAQ/CMAN#quorumdisksize
What's the minimum size of a quorum disk/partition?
The official answer is 10MB. The real number is something like 100KB, but we'd like to reserve 10MB for possible future expansion and features.
-hjp
Hi all,
I'm planning a 2-node cluster and I'm going to use a quorum disk. My question is: what is the best size for this kind of disk? It would be interesting to explain how to calculate this size.
Thanks in advance
ESG
On Wed, Apr 22, 2009 at 9:45 PM, Alex Kompel wrote:
It appears that it took 20 sec for the path to fail over. The quorumd tko is 10 sec by default. You may want to reduce the HBA timeout or tweak tko for quorumd. Basically you want to set all cluster timeouts to exceed the expected failover time of lower-level systems.
-Alex
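That tuning lives on the quorumd line of cluster.conf; a hedged sketch (the label and exact values are illustrative) where the qdisk timeout (interval * tko) is raised above the roughly 20 s path failover seen here, and the totem token timeout is raised above the qdisk timeout in turn:

```xml
<!-- 3 s polling * 10 missed updates = 30 s before a node is evicted,
     comfortably above the ~20 s SAN path failover -->
<quorumd interval="3" tko="10" votes="1" label="qdisk1"/>
<!-- cman's own token timeout (in ms) must exceed the qdisk timeout -->
<totem token="70000"/>
```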
On Wed, Apr 22, 2009 at 4:31 PM, Flavio
Hi folks,
I'm trying to configure a 2-node cluster using a quorum disk as tie-breaker.
I'm getting a problem when my active I/O path for the quorum disk goes down (I'm testing by turning off one (of two) SAN fibre switches), so one node is being fenced.
I believe this is not right, or there may be a better way to ...
cluster can make the difference, or if it is the right way to do it.
Vu
-Original Message-
From: linux-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com] On Behalf Of Vu Pham
Sent: Wednesday, March 18, 2009 7:14 PM
To: linux clustering
Subject: Re: [Linux-cluster] quorum
Hunt, Gary wrote:
Is there a way to get a cluster node to recognize that the number of votes a
quorum disk gets has changed? I added a new node to the cluster and updated
the cluster.conf to reflect the changes and propagated it. In this case I went
from 3 total votes and a quorum disk vote of 1 to 5 total vote
Subject: Re: [Linux-cluster] Quorum Concept
On 3/7/09 4:20 AM, vishal bordia wrote:
Dear,
Let me know how I can add a storage quorum to an existing cluster.
I have an existing cluster of RHEL 4.5 servers.
--
Regards,
Vishal Bordia
HCL Infosystems Ltd.
Mob: +91-9216883922
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
From: linux-cluster-boun...@redhat.com [mailto:linux-cluster-boun...@redhat.com] On Behalf Of Tomasz Sucharzewski
Sent: Wednesday, February 18, 2009 1:57 PM
To: linux clustering
Subject: Re: [Linux-cluster] Quorum disk
I had the same issue and I solved it.
Just increase the quorum check interval; 2 seconds is too little time to inform cman about the quorum status.
I had to increase it to 7 seconds, but remember it also influences the cman timeout, which must be verified.
Best regards,
Tomek
On Feb 17, 2009, at 9:12 PM, Hunt, Ga
Having an issue with my 2-node cluster. I think it is related to the quorum disk.
2-node RHEL 5.3 cluster with a quorum disk. Virtual servers running on each node.
Whenever node1 takes over the master role in qdisk it loses quorum and restarts all the virtual servers. It does regain quorum a few ...