Thanks Gordan!
You pointed me in the right direction.
Yes, we are running an active/passive solution.
Each node can see the storage LUNs of both nodes on startup.
We sort them in an init script within the initrd (remove and re-add) to get a
well-defined order. Then we start multipathing via vpath. A
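The remove-and-re-add trick can be sketched with the standard sysfs interfaces; the device name and host number below are placeholders, since the thread does not show the actual init script:

```shell
# Remove an out-of-order SCSI device so it can be re-detected later
# (sdb and host0 are placeholders for the real device and HBA).
echo 1 > /sys/block/sdb/device/delete

# Rescan the HBA so the LUN reappears, now enumerated after the
# devices that were left in place.
echo "- - -" > /sys/class/scsi_host/host0/scan
```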
Thomas Meller wrote:
You're right, I am unclear.
Some years ago, we tried two versions: storage-based
mirroring and host-based mirroring. As the processes were
too complicated in our company we decided to mirror the
disks host-based. So currently there is a /dev/md0
(simplified) consisting of sd
> From: Gordan Bobic
> To: linux clustering
> Subject: Re: [Linux-cluster] Quorum disk over RAID software device
Thomas Meller wrote:
Many thanks, Gordan.
This could nearly be the solution.
But as I understand it, it's not possible to mirror the root (g)fs to another
computing centre except by relying on a new SPOF (if at all possible)
or on hardware-dependent solutions.
Not sure I follow what you mean. I
the initrd?
Thanks again,
Thomas
Original Message
> Date: Thu, 04 Feb 2010 18:59:52 +
> From: Gordan Bobic
> To: linux clustering
> Subject: Re: [Linux-cluster] Quorum disk over RAID software device
Thomas,
It sounds like what you're looking for is Open Shared Root:
http://www.open-sharedroot.org/
Gordan
Thomas Meller wrote:
Just found this thread while searching for illumination.
We have been running a self-built cluster setup for four years on a 2.4 kernel
(RHAS3).
It's a two-machine setup, nothing fancy.
The fancy thing is that we boot the nodes in different computing centres from
mirrored SAN devices which are ru
Hi Brem
On Wed, 16-12-2009 at 20:41 +0100, brem belguebli wrote:
In my multipath setup I use the following:
polling_interval 3 (checks the storage every 3 seconds)
no_path_retry 5 (will check the path 5 times if a failure happens on
it, making it last scsi_timer (/sys/block/sdXX/device/timeout) + 5*3
seconds)
path_grouping_policy multibus (to l
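Brem's formula for how long a failed path lingers can be checked with quick arithmetic; the 30-second SCSI timeout is an assumed common default (the thread only points at /sys/block/sdXX/device/timeout, without giving a value):

```python
# Values from the multipath settings quoted above; scsi_timeout is an
# assumed default read from /sys/block/sdXX/device/timeout.
polling_interval = 3   # seconds between path checks
no_path_retry = 5      # extra checks after a path failure
scsi_timeout = 30      # assumed common default, in seconds

# Worst-case time before multipath gives up on the path:
# scsi_timeout + no_path_retry * polling_interval
worst_case = scsi_timeout + no_path_retry * polling_interval
print(worst_case)  # 45 seconds under these assumptions
```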
Rafael,
What you have to take care of is the following.
Imagine your SAN admin modifies the wrong zoning while doing his job,
making the qdisk (both legs) unavailable to your nodes. If at that
time you have one node down for a maintenance operation, your
whole cluster would go down.
Brem
Hi Brem
On Tue, 15-12-2009 at 21:15 +0100, brem belguebli wrote:
> Hi Rafael,
>
> I can already predict what is going to happen during your test
>
> If one of your nodes loses only one leg of your mirrored qdisk (either
> with mdadm or lvm), the qdisk will still be active from the point of
>
Hi Kaloyan
On Wed, 16-12-2009 at 13:41 +0200, Kaloyan Kovachev wrote:
> About the 6 node cluster - do you really need to have it operational with just
> a single node? If this is not mandatory it might be better to use different
> votes for the nodes to break the tie instead of mirrored qdi
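Kaloyan's alternative, breaking the tie with unequal node votes instead of a mirrored qdisk, can be checked with a little arithmetic; the 2-vote assignment below is an illustration, not something stated in the thread:

```python
# Six-node cluster: one tie-breaker node carries an extra vote.
votes = [2, 1, 1, 1, 1, 1]
expected_votes = sum(votes)        # 7
quorum = expected_votes // 2 + 1   # CMAN requires a strict majority: 4

# In an even 3/3 network split, only the half holding the 2-vote node
# keeps quorum; the other half goes inquorate.
heavy_half = 2 + 1 + 1             # 4 votes
light_half = 1 + 1 + 1             # 3 votes
print(quorum, heavy_half >= quorum, light_half >= quorum)
```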
On Wed, 16 Dec 2009 01:02:19 +0100, Jakov Sosic wrote
> On Tue, 2009-12-15 at 19:51 +0100, Rafael Micó Miranda wrote:
>
> >
[1]
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Logical_Volume_Manager_Administration/mirrored_volumes.html
> >
http://www.redhat.com/docs/en
On Tue, 2009-12-15 at 19:51 +0100, Rafael Micó Miranda wrote:
> http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Logical_Volume_Manager_Administration/mirrored_volumes.html
> http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Logical_Volume_Manager_Administration/
Hi Rafael,
I can already predict what is going to happen during your test.
If one of your nodes loses only one leg of your mirrored qdisk (either
with mdadm or lvm), the qdisk will still be active from the point of
view of this particular node, so nothing will happen.
What you should consider is
1
On 15.12.2009 20:01, Rafael Micó Miranda wrote:
In a similar situation I am using a RAID-1 device (built with mdadm
prior to the startup of cman/rgmanager) which consists of two LUNs, one
in each location. This works pretty well as a quorum device.
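Andreas's approach might be built along these lines; the device paths and label are placeholders, since the thread gives no exact commands:

```shell
# Mirror one LUN from each location into a single md device
# (the /dev/mapper paths are placeholders for the two site LUNs).
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/mapper/site_a_lun /dev/mapper/site_b_lun

# Initialise the mirror as a quorum disk before cman/rgmanager start.
mkqdisk -c /dev/md0 -l myqdisk
```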
--
Linux-cluster mailing list
Linux-cluster@re
Hi Jakov,
On Tue, 15-12-2009 at 17:26 +0100, Jakov Sosic wrote:
> On Tue, 2009-12-15 at 15:31 +0100, Andreas Pfaffeneder wrote:
>
> > in a similar situation I am using a raid-1 device (built with mdadm
> > prior to the startup of cman/rgmanager) which consists of two luns, one
> > in each
Hi Brem
On Tue, 15-12-2009 at 17:21 +0100, brem belguebli wrote:
> Hi,
>
> The problem you could encounter is the network and storage split brain.
>
> If your Qdsik LUNs were hosted by 2 arrays located in 2 different
> rooms or site, each room hosting half the nodes of your cluster, in
> c
Hi Andreas
On Tue, 15-12-2009 at 15:31 +0100, Andreas Pfaffeneder wrote:
> Hi Rafael,
>
> On 14.12.2009 23:15, Rafael Micó Miranda wrote:
> > Hi all,
> >
> > I was wondering if there is a way to achieve a "quorum disk over a RAID
> > software device" working CMAN cluster.
> >
> >
> in
Hi Jakov,
On Tue, 15-12-2009 at 11:58 +0100, Jakov Sosic wrote:
> On Mon, 2009-12-14 at 23:15 +0100, Rafael Micó Miranda wrote:
>
> > - Using an LVM-Mirror device as a Qdisk and creating additional LUNs for
> > mirror and log in both storage arrays: if the Qdisk is a Clustered
> > Logical V
On Tue, 2009-12-15 at 15:31 +0100, Andreas Pfaffeneder wrote:
> in a similar situation I am using a raid-1 device (built with mdadm
> prior to the startup of cman/rgmanager) which consists of two luns, one
> in each location. This works pretty well as quorum-device.
So you have to create mdraid
Hi,
The problem you could encounter is network and storage split brain.
If your qdisk LUNs were hosted by 2 arrays located in 2 different
rooms or sites, each room hosting half the nodes of your cluster, in
case a SAN and network partition occurs between the 2 rooms, you'll
find yourself in a
Hi Rafael,
On 14.12.2009 23:15, Rafael Micó Miranda wrote:
Hi all,
I was wondering if there is a way to achieve a "quorum disk over a RAID
software device" working CMAN cluster.
in a similar situation I am using a raid-1 device (built with mdadm
prior to the startup of cman/rgmanager) w
On Mon, 2009-12-14 at 23:15 +0100, Rafael Micó Miranda wrote:
> - Using an LVM-Mirror device as a Qdisk and creating additional LUNs for
> mirror and log in both storage arrays: if the Qdisk is a Clustered
> Logical Volume,
But is it possible to have a clustered LVM mirror? And if so, how? I would
On Wed, 2009-11-18 at 11:08 +, Karl Podesta wrote:
> On Wed, Nov 18, 2009 at 06:32:25AM +0100, Fabio M. Di Nitto wrote:
> > > Apologies if a similar question has been asked in the past, any inputs,
> > > thoughts, or pointers welcome.
> >
> > Ideally you would find a way to plug the storage
On Wed, Nov 18, 2009 at 06:32:25AM +0100, Fabio M. Di Nitto wrote:
> > Apologies if a similar question has been asked in the past, any inputs,
> > thoughts, or pointers welcome.
>
> Ideally you would find a way to plug the storage into the 2 nodes that
> do not have it now, and then run qdisk on
Karl Podesta wrote:
> Hi there,
>
> Is it possible to have a quorum disk, applicable only to 2 nodes
> out of a 4 node cluster?
No. The prerequisite for qdisk to work is for all nodes in a cluster to
have it running at the same time.
> This architecture should really be two clusters, right? One
On Tue, 2009-09-15 at 07:57 -0400, James Marcinek wrote:
> Hello all,
>
> I have several clusters which have been built using system-config-cluster.
>
> I would like to now add a quorum disk and possibly a multi-cast address to
> the cluster as well. Can someone tell me how to go about this usin
> Correct me if I'm wrong, but Red Hat does not officially support quorum
> disks in clusters with more than 16 nodes.
>
> Regards,
> Juanra
Hi Juanra, no idea about this limit; my numbers were only to ask what happens
if you need more.
Greetings,
ESG
On Mon, Jun 29, 2009 at 11:48 AM, ESGLinux wrote:
> hi,
> Thanks for your quick answer.
>
> Just for curiosity, why this size? and with 10 MB, what happens if you need
> more? (the question is why can you need more? perhaps 1000 nodes? or it
> doesnt matter)
>
Correct me if I'm wrong, but Red Hat
hi,
Thanks for your quick answer.
Just out of curiosity, why this size? And with 10 MB, what happens if you need
more? (The question is: why would you need more? Perhaps 1000 nodes? Or does
it not matter?)
Greetings,
ESG
2009/6/29 H.Päiväniemi
>
> http://sources.redhat.com/cluster/wiki/FAQ/CMAN#quorum
http://sources.redhat.com/cluster/wiki/FAQ/CMAN#quorumdisksize
What's the minimum size of a quorum disk/partition?
The official answer is 10MB. The real number is something like 100KB, but we'd
like to reserve 10MB for possible
future expansion and features.
-hjp
On Monday 29 June 2009 1
On Wed, Apr 22, 2009 at 9:45 PM, Alex Kompel wrote:
> It appears that it took 20 sec for path to fail over. quorumd tko is 10 sec
> by default. You may want to reduce HBA timeout or tweak tko for quorumd.
> Basically you want to set all cluster timeouts to exceed expected failover
> time of lower-
It appears that it took 20 seconds for the path to fail over. The quorumd tko
is 10 seconds by default. You may want to reduce the HBA timeout or tweak tko
for quorumd. Basically, you want to set all cluster timeouts to exceed the
expected failover time of lower-level systems.
-Alex
On Wed, Apr 22, 2009 at 4:31 PM, Flavio
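Alex's rule of thumb can be sanity-checked numerically; the 1-second qdiskd poll interval is an assumed default, and the 20-second path failover is the figure observed in this thread:

```python
interval = 1           # qdiskd poll interval in seconds (assumed default)
tko = 10               # default tko: missed updates before eviction
path_failover = 20     # observed multipath failover time, in seconds

# With the defaults a node is declared dead before multipath recovers:
default_window = interval * tko      # 10 s, shorter than the 20 s failover
tuned_window = interval * 25         # raising tko to 25 gives enough slack
print(default_window < path_failover, tuned_window > path_failover)
```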
cluster can make the difference, or if it is the
right way to do it.
Vu
-Original Message-
From: linux-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com] On Behalf Of Vu Pham
Sent: Wednesday, March 18, 2009 7:14 PM
To: linux clustering
Subject: Re: [Linux-cluster] quorum
clustering
Subject: Re: [Linux-cluster] quorum disk votes
Hunt, Gary wrote:
> Is there a way to get a cluster node to recognize that the number of
> votes a quorum disk gets has changed? I added a new node to the cluster
> and updated the cluster.conf to reflect the changes and propagated it.
Hunt, Gary wrote:
Is there a way to get a cluster node to recognize that the number of
votes a quorum disk gets has changed? I added a new node to the cluster
and updated the cluster.conf to reflect the changes and propagated it.
In this case I went from 3 total votes and a quorum disk vote
-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com] On Behalf Of Tomasz Sucharzewski
Sent: Wednesday, February 18, 2009 1:57 PM
To: linux clustering
Subject: Re: [Linux-cluster] Quorum disk
I had the same issue and I solved it:
just increase the quorum check interval. 2 seconds is too little time to
inform cman about the quorum status.
I had to increase it to 7 seconds, but remember that it also influences the
cman timeout, which must be verified.
Best regards,
Tomek
On Feb 17, 2009, at 9:12 PM, Hunt, Ga
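Tomek's fix would be expressed in cluster.conf along these lines (a sketch; the label and votes are placeholders, and as he warns, the interval feeds into the cman timeout, which must be re-checked):

```xml
<!-- quorumd with the 7-second check interval described above -->
<quorumd interval="7" tko="10" votes="1" label="myqdisk"/>
```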
On Tue, Oct 23, 2007 at 05:14:58PM -0400, Lon Hohberger wrote:
> I see it - it looks like stop_cman only applies if qdiskd can't reach
> the disk, not if the heuristics are bad.
>
> This should make it kill CMAN if heuristics are bad too if stop_cman is
> set.
It's long ago, but... will the patc
On Mon, 2007-10-22 at 19:43 +0200, Jos Vos wrote:
> On Mon, Oct 15, 2007 at 11:37:32AM -0400, Lon Hohberger wrote:
>
> > On Wed, 2007-10-10 at 20:56 +0200, Jos Vos wrote:
>
> > > Now, this all works fine, cman_tool shows what I expected and when I
> > > remove the file /tmp/qdisk on a node, that
On Mon, Oct 15, 2007 at 11:37:32AM -0400, Lon Hohberger wrote:
> On Wed, 2007-10-10 at 20:56 +0200, Jos Vos wrote:
> > Now, this all works fine, cman_tool shows what I expected and when I
> > remove the file /tmp/qdisk on a node, that node reboots instantaneously.
> >
> > However, after the rebo
On Mon, 2007-10-22 at 09:15 +0200, Reiner Rottmann wrote:
> Hello,
>
> I configured quorum disk for a two node cluster and get "Host: (none)" as
> output from mkqdisk. What does that mean?
It didn't write the hostname to the qdisk header block for some
reason.
Nothing to worry about; the host
On Mon, Oct 15, 2007 at 11:37:32AM -0400, Lon Hohberger wrote:
> > However, after the reboot, while the file tested in the heuristic does
> > still not exist, the node is joining the cluster again and starts some
> > cluster services!
>
> Yup.
>
> Add stop_cman="1" to
I already tried so, witho
On Wed, 2007-10-10 at 20:56 +0200, Jos Vos wrote:
> Hi,
>
> On a (RHEL4 U5) 3-node test cluster I have defined a quorum disk with a
> test heuristic as follows:
>
> [quorumd and heuristic configuration stripped by the list archive]
> The idea is (when replacing the heuristic with one or more "real"
> heuristics) that whe
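From the surrounding description (a test heuristic on /tmp/qdisk, plus Lon's stop_cman="1" advice), the configuration omitted above might have looked roughly like this; the label, votes, intervals, and score are placeholder values, not the poster's actual settings:

```xml
<!-- Hypothetical reconstruction of the omitted quorumd block -->
<quorumd interval="1" tko="10" votes="1" label="testqdisk" stop_cman="1">
    <!-- node counts as healthy only while the test file exists -->
    <heuristic program="test -f /tmp/qdisk" score="1" interval="2"/>
</quorumd>
```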