You also need to change
DEVICE=eth0
to
DEVICE=eth0:1
in /etc/sysconfig/network-scripts/ifcfg-eth0:1
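For reference, a complete alias file might look like this minimal sketch (the ONBOOT/BOOTPROTO/IPADDR/NETMASK values are placeholders I've added, not taken from the thread):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0:1 -- example IP-alias config
# (address values below are illustrative placeholders)
DEVICE=eth0:1
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
```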
HTH
S
From: linux-cluster-boun...@redhat.com [linux-cluster-boun...@redhat.com] On
Behalf Of sunhux G [sun...@gmail.com]
Sent: 16 January 2009 16:21
To: linux clustering
Hi all,
We are in a bit of a jam with our new 6 node GFS cluster which is performing at
a quarter of the speed of our old NFS system. Understandably RedHat have
refused to help us, so I wondered if anybody out there would be willing to help
us for a suitable hourly rate. I think about 40 hours
Hi,
A few weeks ago I started to set up a 6 node GFS cluster connected to a SAN with
Fiber HBAs. Each node is connected to 2 gigabit switches for redundancy.
Up until a few days ago things were going very well, but intensive testing
showed that we have a bit of a performance problem.
Perhaps I
fiber HBAs.
Are these switches suitable for this application?
TIA for any help...
And many thanks for the help already given
Shaun
-Original Message-
From: Shaun Mccullagh
Sent: Wednesday, December 03, 2008 2:57 PM
To: linux clustering
Subject: RE: [Linux-cluster] Unexpected problems with clvmd
Shaun Mccullagh wrote:
> Hi,
>
> I tried to add another node to our 3 node cluster this morning.
>
> Initially things went well; but I wanted to check the new node booted
> correctly.
>
> After the second reboot clvmd failed to start up on the new node.
Hi,
I tried to add another node to our 3 node cluster this morning.
Initially things went well; but I wanted to check the new node booted
correctly.
After the second reboot clvmd failed to start up on the new node (called
pan4):
[EMAIL PROTECTED] ~]# clvmd -d1 -T20
CLVMD[8e1e8300]: Dec 3 14:24
Hi,
Is this entry valid in fstab for two GFS filesystems?
/dev/vg_gfs/main_sites /san/main_sites gfs noauto 0 0
/dev/vg_gfs/main_data /san/main_data gfs noauto 0 0
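As a quick sanity check, here is a sketch that validates entries of this shape (the temp-file path is mine; a well-formed fstab line has six whitespace-separated fields: device, mount point, fs type, options, dump, pass):

```shell
# Write sample GFS entries (copied from the mail) to a scratch file,
# then count the lines that have exactly six fields with fstype "gfs".
cat > /tmp/fstab.gfs-example <<'EOF'
/dev/vg_gfs/main_sites /san/main_sites gfs noauto 0 0
/dev/vg_gfs/main_data /san/main_data gfs noauto 0 0
EOF
awk '$3 == "gfs" && NF == 6 { n++ } END { print n " well-formed gfs entries" }' /tmp/fstab.gfs-example
```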
If I exec mount /san/main_data and mount /san/main_sites
these commands work fine, both
...
Shaun
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Shaun Mccullagh
Sent: Tuesday 2 December 2008 15:34
To: linux clustering
Subject: [Linux-cluster] ccsd : Unable to bind to socket on rhel 5.2
Hi,
I'm trying to get cman to start up on one of the nodes in a 3 node
cluster.
2 nodes are running fine, but for the last one ccsd reports:
ccsd[24647]: Unable to bind to socket.
Cluster.conf looks like this:
...
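A hedged diagnostic sketch for "Unable to bind to socket": the usual causes are a stale ccsd still holding its port, or another process sitting on it. The port numbers below are the ccsd defaults as I recall them (50006-50008); verify them against your own cluster documentation.

```shell
# Check whether anything is already listening on the ports ccsd wants.
# Empty grep output means the port looks free.
for p in 50006 50007 50008; do
    netstat -an 2>/dev/null | grep ":$p " || echo "port $p appears free"
done
# A leftover ccsd process from an earlier start is another common cause:
ps -e | grep '[c]csd' || echo "no ccsd process running"
```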
Many thanks for the clear info.
I've installed kmod-gfs and cman-2.0.84. I notice that the gfs2 kernel module is loaded when
I start service cman.
I see that gfs2.ko is part of kernel-2.6.18-92.1.18.el5, which is the kernel in
use on the system.
Is this expected behaviour?
When I exec service gfs start
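A quick way to see what is actually loaded (a hedged sketch; the kernel version is from the message above, and gfs.ko coming from kmod-gfs rather than the kernel package is my understanding of the RHEL/CentOS 5 packaging):

```shell
# List any gfs/gfs2 modules currently loaded. On this kernel
# (2.6.18-92.1.18.el5) gfs2.ko ships in the kernel package itself,
# while gfs.ko comes from kmod-gfs, so gfs2 appearing when cman
# starts looks like expected behaviour rather than an error.
lsmod 2>/dev/null | grep gfs || echo "no gfs modules loaded"
```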
Hi,
I'm setting up a GFS cluster on CentOS v5.2 with the latest rpms.
I think gfs2 is still at the Technology Preview level.
Does this mean I should use gfs in production at the moment?
Thanks
Shaun
A disclaimer applies to this e-mail message, which can be found at
http://www.espritxb
Hi,
I'm presently setting up an 8 node GFS cluster.
All nodes bar one will mount two GFS files systems from a SAN.
We will use lvm2-cluster on all nodes. All the GFS partitions will use
LVM.
I've defined locking_type = 2 and locking_library =
"liblvm2clusterlock.so" in /etc/lvm/lvm.conf on all