Hello,
Just a tip. Though obviously I do not know your exact FireWire setup, I
ended up with CentOS 5 and kernel 2.6.18-92.1.6.el5.centos.plus, where
FireWire works perfectly, especially for TCP/IP over Ethernet over FireWire.
Sincerely,
T.K.
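On that 2.6.18 kernel, IP over FireWire goes through the old eth1394 driver; a minimal sketch of bringing it up by hand (the interface name eth1 and the address are assumptions, they depend on your hardware):

    # load the Ethernet-over-1394 driver (ieee1394 stack on 2.6.18)
    modprobe eth1394
    # the FireWire port then appears as an additional ethX interface
    ip addr add 10.0.0.1/24 dev eth1
    ip link set eth1 up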
Hi,
We noticed today that if we manually remove an IP via ip a del /32 dev
bond0, the service does not detect this and does not trigger a failover.
Shouldn't the service be checking the status of the IP resource to make
sure it is configured and up? We do have the monitor link option enabled. This is
clus
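In the meantime, you can check by hand what the status check ought to be seeing; bond0 below stands in for your real interface:

    # is the address still configured on the interface?
    ip -4 addr show dev bond0
    # what does rgmanager currently think the service state is?
    clustat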
My setup sounds similar to yours, but with a SAN for all the underlying
storage.
I have a large FC SAN (might be cost prohibitive for you) and three
physical servers (Dell PE1500s). Two of them are running ESX 3.5 and one
is running CentOS. The ESX servers share a chunk of SAN using VMFS3. The
Yeah, similar question to the first responder ... Is your intent to have
shared disk space between all the ESX servers? To support live migrations,
etc.? If so, ESX Server has a built-in filesystem called VMFS, which
can be shared by all the servers in the farm to store VM images, etc. We
us
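If it helps, the datastore itself can also be created from an ESX host's service console; a sketch only, with the LUN path and label made up (in practice you would usually do this from the VI Client):

    # create a VMFS3 filesystem labelled 'shared_lun' on the given partition
    vmkfstools -C vmfs3 -S shared_lun /vmfs/devices/disks/vmhba1:0:0:1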
I am running a home-brew NAS cluster for a medium-sized ISP. It runs on a
pair of Dell PowerEdge 2900s with 1 terabyte of filesystem exported via NFS to 4
nodes running Apache, Exim and IMAP/POP3 services. The filesystem sits on top of
DRBD in an active/backup setup with heartbeat. Performance
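The DRBD half of a setup like this looks roughly as follows; hostnames, disks and addresses are made up for the example, a sketch rather than our exact config:

    resource r0 {
        protocol C;
        on nas1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.10.1:7788;
            meta-disk internal;
        }
        on nas2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.10.2:7788;
            meta-disk internal;
        }
    }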
Gerhard Spiegl wrote:
Hi all,
I'm working on a two-node cluster (RHEL 5.2 + RHCS) with one
XEN virtual machine per node:
node1 => VM1
node2 => VM2
When node1 takes over VM2 via the command:
clusvcadm -M vm:VM2 -m node1
node2 gets fenced after takeover is done, which is probably expected behaviour.
Hi,
That's an ancient version of GFS2; please use something more recent, such
as the current Fedora kernel,
Steve.
On Tue, 2008-07-01 at 09:51 -0400, Ernie Graeler wrote:
> All,
>
> I'm new to this list, so I'm not sure if anyone else has encountered
> this problem. Also, this is my first post
All,
I'm new to this list, so I'm not sure if anyone else has encountered
this problem. Also, this is my first post so forgive me if I do
something incorrect. :-) I've created a cluster using 2 nodes and
created a shared file system between them using GFS2. So far, the setup
seemed to go
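For reference, a two-node GFS2 filesystem is typically created along these lines; the cluster name, filesystem name and device below are placeholders and must match your cluster.conf:

    # -t <clustername>:<fsname>, -j 2 makes one journal per node
    mkfs.gfs2 -p lock_dlm -t mycluster:shared -j 2 /dev/myvg/mylv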
Hi all,
I'm working on a two-node cluster (RHEL 5.2 + RHCS) with one
XEN virtual machine per node:
node1 => VM1
node2 => VM2
When node1 takes over VM2 via the command:
clusvcadm -M vm:VM2 -m node1
node2 gets fenced after takeover is done, which is probably expected behaviour.
As node2 comes up
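For context, the vm resource this command acts on is defined in cluster.conf roughly like the following; the path and the migrate attribute are my assumptions, so check what your rgmanager version actually supports:

    <rm>
        <vm name="VM2" path="/etc/xen" migrate="live" recovery="relocate"/>
    </rm>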
Hello All,
I need your help with an issue I am facing.
OS: RHEL4 ES Update 6 64bit
I have a deployment with a 2 + 1 cluster (2 active and one
passive). I have a service that should fail over, but I ran into issues
when I rebooted all 3 servers. Services got disabled. But when I use
clusvs
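A disabled service has to be re-enabled explicitly; a minimal sketch, with the service and node names as placeholders:

    # re-enable the service, optionally pinning it to a member
    clusvcadm -e service:myservice -m node1
    # then verify
    clustat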
Hello,
This is our setup: we have 3 Linux servers (2.6.18, CentOS 5), clustered,
with clvmd running one big volume group (15 SCSI disks of 69.9 GB each).
After we got a hardware I/O error on one disk, our GFS filesystem began to
loop.
So we stopped all services and identified the corrupted disk
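Assuming the bad disk can be dropped from the volume group, the usual LVM-side cleanup is roughly this; the volume group and device names are placeholders, and you would move extents off first if the disk still responds:

    # migrate extents off the failing disk, if it is still readable
    pvmove /dev/sdX
    # then remove it from the volume group
    vgreduce myvg /dev/sdX
    # or, if the PV has vanished entirely:
    vgreduce --removemissing myvg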