Thanks Hugo
Yours gratefully
On Fri, Jan 7, 2011 at 11:09 AM, Hugo Lombard wrote:
> On Fri, Jan 07, 2011 at 10:12:16AM +0530, Parvez Shaikh wrote:
>> > name="chassis_fence"/>
On Fri, Jan 07, 2011 at 10:12:16AM +0530, Parvez Shaikh wrote:
> name="chassis_fence"/>
> Here fo
Hi Ben
Thanks a ton for the information below, but I have a doubt about the
cluster.conf file snippet below -
Here for "node1
Thanks for responding, Joel. It turned out to be a VLAN change which required a
change to the bonded NIC. Immediately after issuing
ifenslave -c bond-hb eth2
on the problem node, it joined the cluster and the OCFS volumes could be
mounted.
So I just started setting up a RHEL6 box for use in a load-balanced
cluster and have run across a problem. The way you set up a virtual IP
on the back-end real-host side is to add an interface alias to the
loopback device (such as lo:0). Well, the ifup-eth script in RHEL6
refuses to add aliases t
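If ifup-eth refuses to create the alias, a common workaround (a sketch under assumptions - the VIP 192.168.0.100 is a placeholder, not from the thread) is to define the loopback alias in its own ifcfg file, or to add the address with iproute2 directly, bypassing the ifup-eth check:

```
# /etc/sysconfig/network-scripts/ifcfg-lo:0  (placeholder VIP)
DEVICE=lo:0
IPADDR=192.168.0.100
NETMASK=255.255.255.255
ONBOOT=yes

# Equivalent one-off with iproute2:
#   ip addr add 192.168.0.100/32 dev lo
```

The /32 netmask keeps the real host from answering ARP for the VIP on the wrong interface, which is the usual requirement in a direct-routing setup.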
On Thu, Jan 06, 2011 at 12:01:32PM -0500, Morrison, Bradley A wrote:
> Q: Can I reboot one node in a two-node cluster and have it rejoin the cluster?
You certainly should be able to. You should not need a reboot
either if you just want to rejoin.
> I've a two-node cluster which recently
We'd need to know how your storage is configured.
Everything from how storage is connected to the system (if it's
external) or if you're using an internal RAID controller. What sort of
mirroring are you using, how many locally connected hard drives do you
have, how are the clustered file system
On 01/06/2011 03:07 PM, Luis Cebamanos wrote:
> It is not only data; it is as if the system were booting from a
> configuration from 4 years ago, as everything that we have done, including
> other kinds of configuration files, has just disappeared!!!
I apologize now for being blunt;
None of that matters. Wha
It is not only data; it is as if the system were booting from a
configuration from 4 years ago, as everything that we have done, including
other kinds of configuration files, has just disappeared!!!
On 01/06/2011 07:44 PM, Digimer wrote:
On 01/06/2011 02:36 PM, Luis Cebamanos wrote:
It is a cluster with 16
On 01/06/2011 02:51 PM, Luis Cebamanos wrote:
> Well, there are no backups of that valuable lost data, but the cluster
> was using disk mirroring, and no one has a clue how to take advantage
> of that. Nobody here is a cluster expert, and that has been the problem, I
> guess.
> We haven't really "t
Luis,
You still haven't provided the relevant details of your configuration.
The df output you provided isn't relevant in terms of the data recovery.
You haven't even mentioned what file system the lost data was on.
You mention mirroring - what was doing the mirroring? Hardware RAID?
Softwar
On Fri, Jan 7, 2011 at 2:36 AM, Luis Cebamanos wrote:
> It is a cluster with 16 nodes and I suspect the problem is in the head node:
> We were trying to install new hard drives in the system, but something that
> we don't know went wrong, and it ended up with almost 4 years of work lost!!!
> Please, let
Well, there are no backups of that valuable lost data, but the cluster
was using disk mirroring, and no one has a clue how to take advantage
of that. Nobody here is a cluster expert, and that has been the problem, I
guess.
We haven't really "touched" any important system file, after physically
in
On 01/06/2011 02:36 PM, Luis Cebamanos wrote:
> It is a cluster with 16 nodes and I suspect the problem is in the head node:
> $cat /proc/version
> Linux version 2.6.11.4-21.11-smp (ge...@buildhost) (gcc version 3.3.5
> 20050117 (prerelease) (SUSE Linux)) #1 SMP Thu Feb 2 20:54:26 GMT 2006
>
> df -T
On Thu, 06 Jan 2011 13:51:21 -0500, Digimer wrote:
On 01/06/2011 01:00 PM, Hofmeister, James (WTEC Linux) wrote:
umount gfs
service rgmanager stop
service gfs stop
service clvmd stop
service cman stop
This is my method. However, note that stopping GFS unmounts the
volumes,
so you can skip t
It is a cluster with 16 nodes and I suspect the problem is in the head node:
$cat /proc/version
Linux version 2.6.11.4-21.11-smp (ge...@buildhost) (gcc version 3.3.5
20050117 (prerelease) (SUSE Linux)) #1 SMP Thu Feb 2 20:54:26 GMT 2006
$cat /proc/cpuinfo
processor: 0
vendor_id: AuthenticA
Digimer wrote:
(snippage)
In fact, it's a benefit because, last I checked, snapshot'ing of
clvm was not possible.
and it still isn't. I tried to snapshot a gfs2 volume and it refused,
which makes tar'ing a gfs2 directory - without getting "source volume
changed during processing" messages -
On 01/06/2011 01:00 PM, Hofmeister, James (WTEC Linux) wrote:
> umount gfs
> service rgmanager stop
> service gfs stop
> service clvmd stop
> service cman stop
This is my method. However, note that stopping GFS unmounts the volumes,
so you can skip the manual unmount.
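For completeness, bringing the node back up would follow the reverse order (a sketch assuming the same RHCS service names as in the stop sequence above; the startup order is not stated in the thread):

```
service cman start
service clvmd start
service gfs start       # mounts the GFS volumes listed in fstab
service rgmanager start
```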
--
Digimer
E-Mail: digi...@
What is the current recommendation concerning shutting down a cluster node?
Is it acceptable to use shutdown or init to reboot an active/running cluster
node?
I have reviewed the RHCS admin guide and it does not state to *not* use
shutdown or init.
RHCS RH436 training manual (page 305) says th
Q: Can I reboot one node in a two-node cluster and have it rejoin the cluster?
I've a two-node cluster which recently had HBAs replaced on both cluster nodes.
Node 1 was ejected sometime after its latest reboot, and now won't mount its
OCFS volumes. The volumes' headers are verified from n1, i.e.
To address:
As per my understanding, the IP address is the IP address of the management
module of the IBM BladeCenter, and the login/password are the credentials to
access the same.
>> Correct.
However, I did not get the parameter 'Blade'. How does it play a role in
fencing?
>> If I recall correctly the blade= is the id
Hi all,
From RHCS documentation, I could see that bladecenter is one of the
fence devices -
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Cluster_Administration/ap-fence-device-param-CA.html
Table B.9. IBM Blade Center
Field   Description
Name    A name for the IBM BladeCente
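Tying those parameters together, a cluster.conf sketch might look like the fragment below - the device name "chassis_fence" and node "node1" come from this thread, while the address, credentials, and blade number are placeholders:

```xml
<fencedevices>
        <fencedevice agent="fence_bladecenter" name="chassis_fence"
                     ipaddr="192.168.1.10" login="USERID" passwd="PASSW0RD"/>
</fencedevices>
<clusternode name="node1" nodeid="1">
        <fence>
                <method name="1">
                        <!-- blade= selects which blade slot in the chassis to fence -->
                        <device name="chassis_fence" blade="3"/>
                </method>
        </fence>
</clusternode>
```

The fencedevice element describes how to reach the management module once, and each node's device element reuses it by name, adding only the per-node blade number.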
On 1/6/2011 9:02 AM, Parvez Shaikh wrote:
> Thanks Fabio
>
> Is this version the same as what can be referred to as the version of "Red
> Hat Cluster Suite"?
>
> The reason I am asking is, as a part of RHCS there are various
> components (Cluster_Administration-en-US, cluster-cim, cluster-snmp,
> cman, rgma
Thanks Fabio
Is this version the same as what can be referred to as the version of "Red
Hat Cluster Suite"?
The reason I am asking is, as a part of RHCS there are various
components (Cluster_Administration-en-US, cluster-cim, cluster-snmp,
cman, rgmanager, luci, ricci etc etc) and each of which shows its ow