Do you have a different auth key on each node by any chance?
On 2014-06-12 17:29, Arun G Nair wrote:
We have multicast enabled on the switch. I've also tried the
multicast.py tool from RH's knowledge base to test multicast and I see
the expected output, though the tool uses a different multica
Hi,
there should be an openldap resource in your cluster, but if not you can
always use a script resource or write your own.
On Thu, 24 Jan 2013 14:49:45 -0800, Rick Stevens
wrote:
> On 01/24/2013 01:57 PM, Dryden, Tom issued this missive:
>>
>> Good Afternoon,
>>
>> There are a couple of reasons
Hi,
On Wed, 1 Aug 2012 22:54:49 +0800 (SGT), Zama Ques
wrote:
>
> =
> Cluster Name: ClusterA
>
> Node1: system1.example.com Priority:1 in Failover Domain
> Node2: system2.example.com Priority:2 in Failover Domain
>
> File System Resource : /data1 - An ext3 file system
>
> =
>
>
On Wed, 6 Jun 2012 21:12:13 -0700 (PDT), Eric
wrote:
> I'm currently using the HP Procurve 2824 24-port Gigabit Ethernet switch
> for a backside network for synchronizing file systems between the nodes
> in the group. Each host has 4 Gigabit NICs and the goal is to bond two of
> the Gigabit NI
Hi,
On Wed, 21 Mar 2012 11:43:26 +0100, Nicolas Ecarnot
wrote:
> Hi,
>
> We are setting up a new cluster and we still have tests and questions.
> At present, our cluster is two nodes only, with a very simple setup.
> Fencing is done with fence_ipmilan, and the only action we do is
> rebooting.
> To
On Thu, 09 Feb 2012 15:52:57 +0000, Alan Brown
wrote:
> On 09/02/12 15:14, Ray Van Dolson wrote:
>
>> I'm exploring some options for speeding that up -- the main one being
>> dropping my cluster to only one node. Is this doable for a file system
>> that was created with the dlm lock manager inst
On Thu, 26 Jan 2012 08:29:01 -0500, Digimer wrote:
> On 01/26/2012 07:43 AM, jayesh.shi...@netcore.co.in wrote:
>> Dear Digimer & Kaloyan Kovachev ,
>>
>> Do you think this server shutdown problem (while fencing simultaneously
>> from both nodes via drbd.conf) can be
On Thu, 26 Jan 2012 18:13:53 +0530, jayesh.shi...@netcore.co.in wrote:
> Dear Digimer & Kaloyan Kovachev ,
>
> Do you think this server shutdown problem (while fencing simultaneously
> from both nodes via drbd.conf) can be completely avoided if I use a SAN disk
> instead of DRBD
On Wed, 25 Jan 2012 19:27:28 +0530, "jayesh.shinde"
wrote:
> Hi Kaloyan Kovachev ,
>
> I am using the below config in drbd.conf, which is mentioned in the DRBD
> cookbook.
>
> }
>disk {
> fencing resource-and-stonith;
>}
>handlers {
> out
>
>
> fstype="ext3" mountpoint="/mount/path" name="imap1_fs" options="rw"
> self_fence="1"/>
You have self_fence, which should reboot the node instead of powering it
off, but as you are using drbd, the power off may be caused by drbd instead
(check drbd.conf)
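For reference, a hypothetical drbd.conf fragment of the kind the DRBD cookbook describes (resource name and handler path are assumptions, DRBD 8.0/8.2 era syntax): with resource-and-stonith, the configured handler is what decides the peer's fate, so it is the first place to look when a node powers off unexpectedly.

```
resource r0 {
    disk {
        # block I/O and call the fence handler when the peer is lost
        fencing resource-and-stonith;
    }
    handlers {
        # hypothetical handler path -- use whatever your setup provides
        outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
    }
}
```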
>
>
In either case if the remote
Hi,
check /etc/sysconfig/cman - maybe there is a different name present as
NODENAME ... remove the file (if present) or try to create one as:
#CMAN_CLUSTER_TIMEOUT=120
#CMAN_QUORUM_TIMEOUT=0
#CMAN_SHUTDOWN_TIMEOUT=60
FENCED_START_TIMEOUT=120
##FENCE_JOIN=no
#LOCK_FILE="/var/lock/subsys/cman"
CLUSTE
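A minimal sketch of the check being suggested: look for a stale NODENAME override that disagrees with the node name in cluster.conf. The node names are made up, and a temp copy of the file is used so the sketch runs anywhere; on a real node, point `conf` at /etc/sysconfig/cman.

```shell
# Detect a NODENAME override that differs from the cluster.conf node name.
conf=$(mktemp)
cat > "$conf" <<'EOF'
NODENAME=node1.example.com
FENCED_START_TIMEOUT=120
EOF

expected="node2.example.com"   # name from cluster.conf (assumed)
actual=$(sed -n 's/^NODENAME=//p' "$conf")
if [ -n "$actual" ] && [ "$actual" != "$expected" ]; then
    echo "NODENAME mismatch: $actual (file) vs $expected (cluster.conf)"
fi
rm -f "$conf"
```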
On Wed, 9 Nov 2011 10:19:13 -0500, "Nicolas Ross"
wrote:
>> On Wed, 9 Nov 2011 09:42:17 -0500, "Nicolas Ross"
>> wrote:
It will help to avoid expensive DLM locking if you mount the snapshot
with local locking.
>>>
>>> Sorry, you lost me, isn't dlm locking for gfs ?
>>>
>>
>>>Fro
On Wed, 9 Nov 2011 09:42:17 -0500, "Nicolas Ross"
wrote:
>> It will help to avoid expensive DLM locking if you mount the snapshot
>> with
>> local locking.
>
> Sorry, you lost me, isn't dlm locking for gfs ?
>
>From your first email:
"We are currently using RHEL 6.1 with GFS2 file systems on top
On Mon, 7 Nov 2011 16:21:59 -0500, "Nicolas Ross"
wrote:
>> | Is it possible to make snapshots in a *cluster* LVM environment ?
>> | Last time I
>> | read the manual it was not possible.
>>
>> I highly suspect Nick was talking about _hardware_ snapshotting that is
>> supported by some SANs, _not_
Hi,
On Fri, 4 Nov 2011 14:05:34 -0400, "Nicolas Ross"
wrote:
> Hi !
>
> We are currently using RHEL 6.1 with GFS2 file systems on top of
> fiber-channel storage for our cluster. All FSs are in LVs, with clvmd.
>
As they are LVs, you may try to make a snapshot and then mount it with
lock_nol
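A hedged sketch of that approach (volume group, size and mount point are made up; lock_nolock bypasses the DLM entirely, so the snapshot must only ever be mounted on one node, and read-only is safest):

```
# Assumed names throughout; run on one node only.
lvcreate --snapshot --size 5G --name data_snap /dev/vg0/data
mount -t gfs2 -o lockproto=lock_nolock,ro /dev/vg0/data_snap /mnt/snap
```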
On Tue, 01 Nov 2011 09:03:41 -0400 (EDT), Bob Peterson
wrote:
> - Original Message -
> | Which one is not true "I had used common storage" or "On both the
> | nodes
> | data is not in sync" - if it is a common storage the data is the
> | same?
> |
> | if you are using GFS2 without a clust
On Tue, 1 Nov 2011 11:29:56 +0200, wrote:
> Hi,
>
> Following is my setup -
>
> Redhat -6.0 ==> 64-bit
> Cluster configuration using LUCI.
>
> I had set up a 2-node Load Balancing Cluster having the MySQL service
> active on both nodes using different Failover Domains.
> Node1 [Mysql-1 runn
Hi,
On Mon, 31 Oct 2011 12:27:47 +0200, wrote:
> Hi Team,
>
> Can anyone please let me know whether it is possible to implement a
> MySQL Load Balancing [i.e. active-active or Master-Master] cluster in
> Redhat Linux 6.0 using GFS2 as shared storage?
> Any step-by-step guide will be appr
Hi,
> Is this possible? I would think that if a node has properly/cleanly left
> the cluster, locks that were held by that node would be released. Is there
> a way to display locks that may still exist for that node that is
> down? And lastly, is there a way to force the release of those loc
Hi,
post_join_delay is the wrong parameter to change. You will need to change
the 'shutdown_wait' or 'stop' timeout for the resource
On Wed, 03 Aug 2011 16:58:43 +0530, "jayesh.shinde"
wrote:
> Hi All ,
>
> I have query about the "clusvcadm" and "post_join_delay"
> I am using the high traffic M
Hi,
On Tue, 7 Jun 2011 11:57:02 -0700 (PDT), Srija
wrote:
> Hi Kaloyan
>
>> --- On Fri, 6/3/11, Kaloyan Kovachev
>> wrote:
>>
>> >
>> > to use broadcast (if private addresses are in the
>> same
>> > VLAN/subnet) you
>> &
Hi,
On Thu, 2 Jun 2011 08:37:07 -0700 (PDT), Srija
wrote:
> Thank you so much for your reply again.
>
> --- On Tue, 5/31/11, Kaloyan Kovachev wrote:
> Thanks for your reply again.
>
>
> >
>> If it is a switch restart you will have in your logs the
>>
Hi,
replying to your original email ...
the problem i can see in the logs is the line:
openais[971]: [SYNC ] This node is within the primary component and will
provide service.
as you have expected_votes=2 and node votes=1 this shouldn't happen, so it
looks like a bug
P.S.
If you had fencing con
)
> thanks again .
> regards
>
> --- On Mon, 5/30/11, Kaloyan Kovachev wrote:
>
>> From: Kaloyan Kovachev
>> Subject: Re: [Linux-cluster] Cluster environment issue
>> To: "linux clustering"
>> Date: Monday, May 30, 2011, 4:05 PM
>> Hi,
>&
Hi,
when your cluster gets broken, the most likely reason is a network problem
(switch restart or multicast traffic lost for a while) on the interface
where the serverX-priv IPs are configured. Having a quorum disk may help by
giving a quorum vote to one of the servers, so it can
fence th
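In cluster.conf a quorum disk for a two-node setup could look roughly like this (label, timings and votes are placeholders, not a tested configuration):

```xml
<cluster name="ClusterX" config_version="2">
  <cman expected_votes="3" two_node="0"/>
  <!-- qdisk contributes one extra vote, so the node that still sees the
       quorum disk stays quorate and can fence its peer -->
  <quorumd interval="1" tko="10" votes="1" label="qdisk1"/>
  <!-- clusternodes, fencedevices, rm sections omitted -->
</cluster>
```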
On Mon, 30 May 2011 11:06:45 -0400, Digimer wrote:
> On 05/30/2011 10:49 AM, Hiroyuki Sato wrote:
>> Hello Digimer.
>>
>> Thank you for your advice.
>> It is very very useful information for me.
>>
>>> a) Forcing a node to power off, or does it just start an ACPI
shutdown?
>>
>> Maybe ok. I'll tes
Dear Ganesh,
you do not need to assign the IP via script, and there is nothing more to
do with cluster.conf than listing the IPs.
The IP is added to the interface (physical, VLAN, bonded or bridge - it
does not matter) which contains an IP from the same subnet.
So, to have 192.168.10.111 added to bo
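In cluster.conf that is just an ip resource inside the service (service name and autostart setting here are illustrative):

```xml
<rm>
  <service name="svc1" autostart="1">
    <!-- no interface is named: rgmanager picks the NIC that already
         holds an address in the 192.168.10.0/24 subnet -->
    <ip address="192.168.10.111" monitor_link="1"/>
  </service>
</rm>
```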
On Wed, 20 Apr 2011 10:08:08 +0200, dlugi wrote:
> On Wed, 20 Apr 2011 14:58:49 +0700, "Fajar A. Nugraha"
> wrote:
>> On Wed, Apr 20, 2011 at 2:50 PM, dlugi
>> wrote:
>>> Is it possible to build some kind of HPC cluster where this single
>>> process
>>> could be distributed for several machi
On Mon, 18 Apr 2011 08:57:34 -0500, Terry wrote:
> On Mon, Apr 18, 2011 at 8:38 AM, Terry wrote:
>> On Mon, Apr 18, 2011 at 3:48 AM, Christine Caulfield
>> wrote:
>>> On 17/04/11 21:52, Terry wrote:
As a result of a strange situation where our licensing for storage
dropped off, I
On Mon, 29 Nov 2010 21:40:42 +0000 (UTC), A. Gideon wrote
> On Fri, 26 Nov 2010 15:04:40 +0000, Colin Simpson wrote:
>
> >> but when I break the DRBD connection between two primary nodes,
> >> "disconnected" apparently means that the nodes both continue as if
> >> they've UpToDate disks. But this
Hi,
just my 0.02 below
On Mon, 22 Nov 2010 21:21:50 +0000 (UTC), "A. Gideon"
wrote:
> On Sun, 21 Nov 2010 21:46:03 +0000, Colin Simpson wrote:
>
>
>> I suppose what I'm saying is that there is no real way to get a quorum
>> disk with DRBD. And basically it doesn't really gain you anything
>> w
Hi,
you should update /usr/share/cluster/cluster.rng to validate the new
options and your 'new' service/script
On Thu, 29 Jul 2010 14:18:58 -0500, Dustin Henry Offutt
wrote:
> Hello,
>
> Does anyone know how to force a cluster (the "Cluster Suite" as released
> with the RHEL5.4 ISO, cman, rgman
> real risk on physical servers too.
>
> I had better do some more testing.
>
> Thanks for the input.
>
> regards,
> Martin
>
>> -Original Message-
>> From: linux-cluster-boun...@redhat.com
> [mailto:linux-cluster-boun...@redhat.com]
>> On
Hi,
i can confirm that time steps do cause reconfiguration. Not sure if this
was the reason, but one of my nodes was fenced from time to time
(previously) after several reconfigurations, and it also caused some
problems with gfs being withdrawn.
ntpdate running as a cron job does step changes, but
Hi,
On Sun, 04 Jul 2010 19:24:03 +0400, AlannY wrote:
> Hi there. I'm new to clustering, so I have a question.
>
> I have 2 nodes. For example, ONE and TWO. On ONE, I can export disk
block
> device
> (via GNBD or iSCSI) to TWO. On TWO, I can export another disk block
device
> and import
> it in
Hi,
On Mon, 21 Jun 2010 16:07:34 +0100, Jason Fitzpatrick
wrote:
> Hi all
>
> I am having no end of trouble getting a basic Active Active Cluster
> working. At the moment it is in test / proof of concept and has manual
> fencing in place but I cannot for the life of me get the 2 nodes to
> join t
On Mon, 21 Jun 2010 10:20:34 +0100, Gordan Bobic
wrote:
> On 06/21/2010 08:52 AM, Kaloyan Kovachev wrote:
>> On Fri, 18 Jun 2010 18:15:09 +0200, brem belguebli
>> wrote:
>>> How do you deal with fencing when the intersite interconnects (SAN and
>>>
On Fri, 18 Jun 2010 18:15:09 +0200, brem belguebli
wrote:
> How do you deal with fencing when the intersite interconnects (SAN and
> LAN) are the cause of the failure ?
>
GPRS or the good old modem over a phone line?
>
> 2010/6/18 Don Hoover :
>> Couldn't the geo cluster be most reliably solve
On Thu, 17 Jun 2010 14:59:59 -0500, Dustin Henry Offutt
wrote:
> Believe this issue has been resolved by altering
/usr/share/cluster/ip.sh.
>
> The resulting script has added new XML for a new "device" parameter.
>
> New variable 'device' is passed to the ip_op function and then to
functions
> i
On Tue, 20 Apr 2010 17:41:42 +0000, "Joseph L. Casale"
wrote:
>>One of the problems with clustering is the fencing barrier to entry when
>>shared data is at stake -- often it's high cost and not all hardware
>>vendors resell them.
>
> I'm surprised more people don't just fence with a managed swit
On Tue, 30 Mar 2010 21:28:46 +0000, "Joseph L. Casale"
wrote:
> Anyone know how I might accomplish keeping cron jobs on the active node?
> I realize I can create the job on all nodes such that it quietly checks
> for status; if it's the active node, it runs, but it's much easier to
> maintain my co
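One common shape for that per-node check is a small wrapper every node's crontab calls. The clustat line and the service name are assumptions; the check command is passed in as an argument here so the gating logic itself can be demonstrated anywhere.

```shell
# Run the real job only on the node where the service is active.
# On a real node the check would be something like:
#   clustat -s myservice | grep -q "$(hostname)"
run_if_active() {
    svc="$1"; shift
    if "$@"; then
        echo "job ran for $svc"
    else
        echo "skipped: $svc not active on this node"
    fi
}

run_if_active myservice true    # stands in for a passing clustat check
run_if_active myservice false   # stands in for a failing check
```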
On Mon, 29 Mar 2010 10:14:49 +0200, carlopmart
wrote:
> oops. Maybe the problem is that I need to put seconds like this:
> interval="1s" ... i
> will try it.
>
the (missing) seconds are not a problem, but you need to restart rgmanager
for it to reread the values
> wrote:
>> Hi.
>>
>> look
he node. But in my case - all nodes
> - can already see all LUNs - so I dont really have any need to do an
> iSCSI export - appreciate the thought though.
>
The idea was actually not to export them, but to run mdadm simultaneously on
both nodes. But the problem is when just one of the no
Hi,
i have a similar setup, but with iscsi instead of GNBD, and when the DRBD
devices are exported as fileio instead of blockio there are similar problems
caused by the read caching. Check if you have some caching from GNBD and
disable it. I don't think it is related to the UPS, but see below
On S
Hi,
On Wed, 03 Mar 2010 11:16:07 -0800, Michael @ Professional Edge LLC wrote
> Hail Linux Cluster gurus,
>
> I have researched myself into a corner and am looking for advice. I've
> never been a "clustered storage guy", so I apologize for the potentially
> naive set of questions. ( I am savv
On Fri, 26 Feb 2010 23:12:59 +0000, Joseph L. Casale wrote
> >Is the node successfully fenced? The service will not migrate before that and
> >if you don't have proper fencing that could be the reason
>
> Yes, this confirms what I thought. I did see that fencing was awry and I am
working
> on that
On Fri, 26 Feb 2010 00:09:25 +0000, Joseph L. Casale wrote
> Hi,
> Still testing around and learning the ins and outs of rhcs, I have an apache
> service with a mount etc configured that starts fine, migrates as well. If
> I drop the Ethernet interface on the node with the service active, the
> re
On Wed, 24 Feb 2010 09:55:21 -0500 (EST), Bob Peterson wrote
> - "Fabio M. Di Nitto" wrote:
> | -BEGIN PGP SIGNED MESSAGE-
> | Hash: SHA1
> |
> | The cluster team and its community are proud to announce the 3.0.8
> | stable release from the STABLE3 branch.
> |
> | This release contai
Hi Gordan,
On Tue, 23 Feb 2010 20:40:57 +0000, Gordan Bobic wrote
>
> I use it in active-active mode with GFS. In that case I just use the
> fencing agent in DRBD's "stonith" configuration so that when
> disconnection occurs, the failed node gets fenced.
>
I am also using it in active-active
On Mon, 22 Feb 2010 15:42:08 -0500 (EST), Leo Pleiman wrote
> Check this out...
>
>
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Configuration_Example_-_NFS_Over_GFS/index.html
>
> It provides nfs of the file system as a service to nodes outside the
cluster. NFS from one node
Hi,
On Sun, 21 Feb 2010 20:30:38 +0000, Joseph L. Casale wrote
> Hey,
> I am new to clusters, and have been reading up all the options but
> given I haven't any experience ever setting one up I don't know which
> option is best suited.
>
> I need to replicate a local volume between two servers, a
Hello,
when __independent_subtree is used and the resource fails, the service is not
relocated after max_restarts. Is this a bug or is it by design?
Example:
the idea here is: if MySQL has crashed, mysqld_safe will restart it, but if
there is a
Hi,
if the pid file contains more than one line (like sendmail's), the
status_check_pid function returns an error. The attached patch replaces it
with 'read pid', as is done for stop_generic
ra-skelet.diff
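The gist of the fix can be sketched like this (file created in a temp path; sendmail's pid file carries the command line on a second line, which is what breaks a whole-file read):

```shell
# A pid file with more than one line, as sendmail writes it:
pidfile=$(mktemp)
printf '1234\n/usr/sbin/sendmail -bd -q1h\n' > "$pidfile"

# 'read' takes only the first line -- the actual pid -- which is what
# the patch switches status_check_pid to, mirroring stop_generic:
read pid < "$pidfile"
echo "$pid"      # -> 1234
rm -f "$pidfile"
```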
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www
> >
> > Yes, it does work in active-active but DRBD people themselves don't
> > recommend running it in production active-active under cluster file
> > system, I quote from their website:
> > "DRBD's primary-primary mode with a shared disk file system (GFS,
> > OCFS2). These systems are very sensi
On Tue, 02 Feb 2010 07:47:42 +0000, yvette hirth wrote
> Dirk H. Schulz wrote:
>
> > What I do not understand at the moment: If you can afford to restrict
> > one of every blade's two interfaces to cluster communication, why don't
> > you put them into a VLAN (the real interfaces, not virtual on
On Fri, 29 Jan 2010 10:33:48 +0500, Zaeem Arshad wrote
> On Thu, Jan 28, 2010 at 3:44 PM, Gordan Bobic wrote:
>
> You have made interesting observations. Let me see if I can answer these.
>
> > Hmm... What sort of ping time do you get? I presume you have established
> > that it is on the sensibl
On Wed, 30 Dec 2009 08:32:56 -0500, michael.lense wrote
> Red Hat Linux-Clustering
>
>
> I am currently setting up a two node cluster for a Database Environment…
>
> I have Network Bonding setup on the two nodes and was reading in one
document that Red Hat uses eth0 as the default h
On Wed, 16 Dec 2009 01:02:19 +0100, Jakov Sosic wrote
> On Tue, 2009-12-15 at 19:51 +0100, Rafael Micó Miranda wrote:
>
> >
[1]
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Logical_Volume_Manager_Administration/mirrored_volumes.html
> >
http://www.redhat.com/docs/en
Update
On Fri, 11 Dec 2009 14:32:39 +0200, Kaloyan Kovachev wrote
> On Fri, 11 Dec 2009 11:36:53 +0000, Christine Caulfield wrote
>
>
>
> >
> > Hmm. I am totally wrong. Very sorry. keyfile IS allowed in cluster3, it
> > overrides the one assigned in totem. In wh
On Fri, 11 Dec 2009 11:36:53 +0000, Christine Caulfield wrote
>
> Hmm. I am totally wrong. Very sorry. keyfile IS allowed in cluster3, it
> overrides the one assigned in totem. In which case I'm not sure why it's
> failing to validate on your system.
>
according to the validation file (clus
On Fri, 11 Dec 2009 10:24:49 +0000, Christine Caulfield wrote
> On 11/12/09 10:21, Kaloyan Kovachev wrote:
> > On Fri, 11 Dec 2009 09:58:38 +0000, Christine Caulfield wrote
> >> On 11/12/09 09:48, Kaloyan Kovachev wrote:
> >>> On Thu, 10 Dec 2009 16:12:33 +0000, Chris
On Fri, 11 Dec 2009 09:58:38 +0000, Christine Caulfield wrote
> On 11/12/09 09:48, Kaloyan Kovachev wrote:
> > On Thu, 10 Dec 2009 16:12:33 +0000, Christine Caulfield wrote
> >> On 10/12/09 15:27, Kaloyan Kovachev wrote:
> >>> Hello,
> >>>after upgr
On Thu, 10 Dec 2009 16:12:33 +0000, Christine Caulfield wrote
> On 10/12/09 15:27, Kaloyan Kovachev wrote:
> > Hello,
> > after upgrading to 3.0.6 i get:
> >
> > Starting cman... Relax-NG validity error : Extra element cman in interleave
> >
> > but cluster
Hello,
after upgrading to 3.0.6 i get:
Starting cman... Relax-NG validity error : Extra element cman in interleave
but cluster.conf should be correct and was working so far without problems.
The corresponding section in cluster.conf is:
how should i change it to pass the validity check?
On Tue, 17 Nov 2009 12:07:00 +0000, Steven Whitehouse wrote
> Hi,
>
> On Tue, 2009-11-17 at 12:53 +0100, brem belguebli wrote:
> > I think the constraint is just like for regular filesystems.
> >
> > 1 GB should be right, shouldn't it ?
> >
> Well there are journals of 128M each (default) so for
On Tue, 17 Nov 2009 12:40:04 +0100, carlopmart wrote
> Steven Whitehouse wrote:
> > Hi,
> >
> > On Tue, 2009-11-17 at 12:32 +0100, carlopmart wrote:
> >> carlopmart wrote:
> >>> Hi all,
> >>>
> >>> Which is the minimal partition size that needs GFS2??
> >>>
> >>> Thanks.
> >>>
> >> Please, any hin
Hello,
yesterday the cluster died, i guess after a short network outage, and Node2
(which is the only one with a single NIC, without bonding) was fenced, but
services were not relocated, and after rebooting it was unable to mount the
GFS2 shares, so i had to reboot the entire cluster.
It's a 4 node cluste
On Sat, 7 Nov 2009 00:11:39 -0200, Cláudio Santiago wrote
> I have encountered problem to execute some operations on gfs2 filesystem.
>
> # mount | grep gfs2
> /dev/mapper/vg-test on /mnt/test type gfs2
(rw,relatime,hostdata=jid=0,quota=on)
>
> # gfs2_tool journals /mnt/test
> Error mounting
On Fri, 30 Oct 2009 10:03:41 -0500, David Teigland wrote
> On Thu, Oct 29, 2009 at 07:13:04PM +0200, Kaloyan Kovachev wrote:
> > Hello,
> > i would like to have one specific node to always fence any other failed
> > node
> > and some nodes to never try to fence.
Hello,
i would like to have one specific node to always fence any other failed node
and some nodes to never try to fence. For example, in a 4 or 5 node cluster:
Node1 fences any other failed node, Node2 and Node3 will try fencing some time
later (in case the failed node is Node1), and Node4/Node5 shou
On Wed, 28 Oct 2009 19:44:53 -0400, Madison Kelly wrote
> Branimir wrote:
> > Hi list ;)
> >
> > Well, here is my problem. I configured a few production clusters myself
> > - mostly HP Proliant machines with ILO/ILO2. Now, I would like to do the
> > same thing but with ordinary PC hardware (the f
On Tue, 27 Oct 2009 14:32:39 -0500, David Teigland wrote
> On Tue, Oct 27, 2009 at 08:21:50PM +0100, Fabio M. Di Nitto wrote:
> > 2) if we use httpd only to distribute cluster.conf, then I'd like to see
> > "httploader" (or please find a better a name) being a wrapper for wget
> > rather than a bra
On Wed, 14 Oct 2009 17:50:34 +0200, carlopmart wrote
> Jan Friesse wrote:
> > Carlopmart,
> > it's not a problem of fence_vmware_ng and definitely not a malfunction.
> > The cluster is trying to keep all possible nodes going. So what will happen:
> > - You will boot node01
> > - This will start fencing,
Hi,
On Sat, 10 Oct 2009 15:41:33 -0400, Madison Kelly wrote
> Andrew A. Neuschwander wrote:
> > Madison Kelly wrote:
> >> Hi all,
> >>
> >> Until now, I've been building 2-node clusters using DRBD+LVM for the
> >> shared storage. I've been teaching myself clustering, so I don't have
> >> a wor
On Mon, 12 Oct 2009 11:14:08 +0100, Steven Whitehouse wrote
> Hi,
>
> On Mon, 2009-10-12 at 13:07 +0300, Kaloyan Kovachev wrote:
> > Hi,
> >
> > On Fri, 09 Oct 2009 18:01:36 +0100, Steven Whitehouse wrote
> > > Hi,
> > >
> > > The idea is
Hi,
On Fri, 09 Oct 2009 18:01:36 +0100, Steven Whitehouse wrote
> Hi,
>
> The idea is that it should be self-tuning now, adjusting itself to the
> conditions prevailing at the time. If there are any remaining
> performance issues though, we'd like to know so that they can be
> addressed,
>
I ha
On Fri, 18 Sep 2009 08:44:05 -0400 (EDT), Bob Peterson wrote
> - "Kaloyan Kovachev" wrote:
> | Hello,
> | gfs2_tool journals /mnt/GFS
> | returns
> | Error mounting GFS2 metafs: Block device required
> |
> | it gives the same message when i point to the di
Hello,
gfs2_tool journals /mnt/GFS
returns
Error mounting GFS2 metafs: Block device required
it gives the same message when i point to the disk itself no matter if it is
mounted or not ... is there something i am missing or is there a bug with the
tool?
On Tue, 1 Sep 2009 14:48:13 +0200, ESGLinux wrote
> 2009/9/1 Kaloyan Kovachev
> On Tue, 1 Sep 2009 14:21:47 +0200, ESGLinux wrote
>
> >
> >
> >
> >
> >
> > You should use one iscsi lun shared by both cluster nodes. You can mount a
> G
On Tue, 1 Sep 2009 14:21:47 +0200, ESGLinux wrote
>
>
>
>
>
> You should use one iscsi lun shared by both cluster nodes. You can mount a
GFS filesystem without locking (lock=nolock) with (correct me if I am wrong)
the node not being part of a cluster, but only on one node at a time.
> You
Hi,
On Tue, 1 Sep 2009 11:26:48 +0200, Jakov Sosic wrote
> On Mon, 31 Aug 2009 23:26:06 +0200
> "Marc - A. Dahlhaus" wrote:
>
> > I think your so called 'limitation' is more related to mistakes that
> > were made during the planning phase of your cluster setup than to
> > missing functionality.
>
On Mon, 13 Jul 2009 16:03:00 +0100, Bryn M. Reeves wrote
> On Mon, 2009-07-13 at 15:40 +0100, Bushby, Bruce (London)(c) wrote:
> >
> > Greetings!
> >
> > I'm hoping a member could assist me in clearing up some understanding
> > I appear to be missing when it comes to GNBD.
> >
> > Today I clu
Hi,
DRBD is what you would want to use here and will do the job better. It is
possible to create software RAID 1 over GNBD and a local drive/partition, but
it is not recommended.
On Mon, 13 Jul 2009 15:40:28 +0100, Bushby, Bruce \(London\)\(c\) wrote
>
> Greetings!
>
> I'm hoping a member could ass
Hello list,
i am experimenting with a cluster based on RHCM, but to avoid a single point
of failure i would like to have two storage machines, replicated via drbd. To
be able to access the data in case of failure of one of the storages, there
should be multipath access to the data. So my question is: