While doing a yum update on one RHCS node I got this:
Transaction Check Error: file /etc/depmod.d/gfs2.conf from install of
kmod-gfs2-1.92-1.1.el5 conflicts with file from package
kmod-gfs2-1.52-1.16.el5
This node does not use GFS2 (and I already unmounted any GFS1 volumes
anyway), so I
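The obvious way out, assuming the old module package really is unneeded on a node that doesn't use GFS2 (this is my guess, not an official Red Hat fix), is to remove the older kmod-gfs2 so the transaction no longer conflicts, then retry:

```shell
# Package names taken from the error message above; check what is
# actually installed first with `rpm -q kmod-gfs2`.
rpm -e kmod-gfs2-1.52-1.16.el5
yum update
```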
During the update to RHEL 5.2 I had another problem.
The mkinitrd process would hang indefinitely while scanning my block
devices with
lvm.static lvs --ignorelockingfailure --noheadings -o vg_name blockdev
Running strace on the lvs process showed no signs of life. It hung on
random function calls (I
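In case it helps anyone else: a common way to keep LVM scans away from block devices that hang is a device filter in /etc/lvm/lvm.conf. The device names below are just examples for illustration, not my actual layout:

```
# /etc/lvm/lvm.conf -- example: scan only the local disk, reject all
# other block devices so lvm.static never touches the problem paths
devices {
    filter = [ "a|^/dev/sda|", "r|.*|" ]
}
```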
Hi list,
Sorry for the noise, but I thought posting my results with the update to
RHEL 5.2 would be of interest to a few of you.
The node that has been updated to RHEL 5.2 seems to operate very nicely
in the cluster so far. After a reboot it rejoined the cluster and got
back its affinity
Mag Gam wrote:
Hello:
I am planning to implement GFS for my university as a summer project.
I have 10 servers each with SAN disks attached. I will be reading and
writing many files for professor's research projects. Each file can be
anywhere from 1k to 120GB (fluid dynamic research images).
Hello List,
Recently we have been looking at replacing our NFS server with a SAN in our
(relatively small) webserver cluster. We decided to go with the Dell
MD3000i, an iSCSI SAN. Right now I have it for testing purposes and I'm
trying to set up a simple cluster to get more experience with it. At
Hello everybody
I have the following test setup:
- RHEL 5.1 Cluster Suite with rgmanager-2.0.31-1 and cman-2.0.73-1
- Two VMware machines on an ESX 3.5 U1, so no fence device (it's only a test)
- 4 IP resources defined
- GFS over DRBD, doesn't matter, because it doesn't even work on a local
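For reference, IP resources under rgmanager live in /etc/cluster/cluster.conf. A minimal sketch of the four IP resources, assuming the usual RHCS 5 schema (the addresses here are made up):

```xml
<rm>
  <resources>
    <ip address="192.168.1.101" monitor_link="1"/>
    <ip address="192.168.1.102" monitor_link="1"/>
    <ip address="192.168.1.103" monitor_link="1"/>
    <ip address="192.168.1.104" monitor_link="1"/>
  </resources>
  <!-- a service that brings up one of the addresses -->
  <service autostart="1" name="ipsvc">
    <ip ref="192.168.1.101"/>
  </service>
</rm>
```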
If you bond between two different switches, you'll only be able to do
failover between the NICs. If you use multipath, you can round-robin
between them to provide greater bandwidth headroom.
The same goes for bonding: link aggregation means active-active bonding,
and active-active bonding across
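To illustrate the difference (a sketch, assuming RHEL 5 conventions; interface and mode names are the standard bonding driver options): active-backup works fine across two independent switches, while 802.3ad aggregation generally needs both ports on the same switch or a multi-chassis LAG:

```
# /etc/modprobe.conf
alias bond0 bonding

# Failover only -- safe across two independent switches
options bonding mode=active-backup miimon=100

# Active-active (802.3ad LACP) -- ports must terminate on the same
# switch (or a stack that supports cross-switch LAG); gives you the
# aggregate bandwidth of both links
# options bonding mode=802.3ad miimon=100 lacp_rate=fast
```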
I'm having the exact same issue on a RHEL 5.2 system, and have an open
support case with Red Hat. When it is resolved I can post the details.
Any word on this? I think I may get my own case going. Do you know if a
bugzilla got assigned to this?
Thanks!
Jeremy