Hi,
We have an 8-node cluster running SASgrid. We have the core components of
SAS under RHCS (rgmanager) control, but there are user/client jobs that are
initiated manually and by cron outside of RHCS. We have run into an issue a
few times where it seems that when the gfs init script is called
Ok, so which is better for a Red Hat cluster?
I prefer to do both. Since they are GFS filesystems, I want them mounted on all nodes at
boot. So the entries are in /etc/fstab and /etc/init.d/gfs is enabled.
Then I configure the gfs resources and do not enable the force unmount. So
the cluster will
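A hypothetical cluster.conf fragment matching this approach might look like the following; the resource name, device, and mount point are invented for illustration, and force_unmount is left disabled so rgmanager will not forcibly unmount a filesystem that non-cluster jobs are using:

```xml
<!-- Invented names/paths, for illustration only. The same mount also
     appears in /etc/fstab, e.g.:
     /dev/vg_cluster/lv_gfs01  /mnt/gfs01  gfs  defaults 0 0 -->
<resources>
  <clusterfs name="gfs01" device="/dev/vg_cluster/lv_gfs01"
             mountpoint="/mnt/gfs01" fstype="gfs" force_unmount="0"/>
</resources>
```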
I have also encountered this problem several times, and although this list
seems to recommend running clvmd -R, it has in fact never helped the
situation (I'm running CentOS 5.2). The only way I can solve this problem is
by rebooting all nodes in the cluster and then extending the lv.
We
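For reference, the sequence usually suggested for this (a sketch only; volume and mount names are invented, and this assumes clvmd is running on every node) is:

```shell
# On one node: extend the LV; clvmd should propagate the change cluster-wide
lvextend -L +10G /dev/vg_cluster/lv_gfs01

# If other nodes still report the old size, refresh clvmd's device view
clvmd -R

# Grow the GFS filesystem online (GFS1 tool on RHEL/CentOS 5)
gfs_grow /mnt/gfs01
```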
I was doing a test, and when I force node 1 to fail, node 2 works
correctly and takes over the services that I want, but then node 1 suddenly
restarts.
How did you force the failure? If the nodes unexpectedly lose communication
to each other via the cluster multicast, then fencing is going
I wasn't sure which list to send this to, so I chose both cluster and lvm.
My current configuration:
2 Node RHEL 5.2 cluster with multiple GFS on top of logical volumes in one
volume group.
# rpm -q cman lvm2 lvm2-cluster kmod-gfs
cman-2.0.84-2.el5
lvm2-2.02.32-4.el5
lvm2-cluster-2.02.32-4.el5
Hi,
I was reading through a write-up on fast_statfs,
http://people.redhat.com/rpeterso/Patches/GFS/readme.gfs_fast_statfs.R4. In
the explanation of how the code works in 2.6, it states:
2.6 Local change (delta) is synced to disk whenever quota daemon is
waked up and the (a tunable, default to 5
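If I remember the write-up correctly, the feature is toggled per mount with gfs_tool settune (an assumption worth checking against the readme; the mount point is invented):

```shell
# Enable fast statfs on a mounted GFS filesystem (mount point invented);
# the tunable name is taken from the write-up and should be verified there
gfs_tool settune /mnt/gfs01 statfs_fast 1
```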
Hi,
I noticed the following messages when starting services.
Jul 24 10:05:09 lxomt04e in.rdiscd[3763]: 224.0.0.2 rdisc
Statistics
Jul 24 10:05:09 lxomt04e in.rdiscd[3763]: 3 packets transmitted,
Jul 24 10:05:09 lxomt04e in.rdiscd[3763]: 0 packets received,
Jul 24 10:05:09 lxomt04e
.conf.default.force_igmp_version = 2
net.ipv4.conf.all.force_igmp_version = 2
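For anyone else hitting this, the settings above can be made persistent in /etc/sysctl.conf (a sketch; applying requires root):

```shell
# /etc/sysctl.conf fragment: force IGMPv2 on all interfaces
net.ipv4.conf.default.force_igmp_version = 2
net.ipv4.conf.all.force_igmp_version = 2

# Apply without a reboot:  sysctl -p
# Verify the live value:   cat /proc/sys/net/ipv4/conf/all/force_igmp_version
```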
-Jeremy
On Thu, May 29, 2008 at 12:36 PM, Jeremy Lyon [EMAIL PROTECTED] wrote:
I'm having the exact same issue on a RHEL 5.2 system, and have an open
support case with Red Hat. When it is resolved I can post the details.
Any word
Hi,
I just noticed that in RHEL 4 clustat could be run by any user, and now in
RHEL 5 it requires root. Was this done on purpose, or is it a by-product of
the changes in cluster from v1 to v2? Is there anything that can be done to
allow a user to run clustat without sudo? I don't think I want to
Hi,
We noticed today that if we manually remove an IP via ip a del IP/32 dev
bond0, the service does not detect this and does not cause a failover.
Shouldn't the service be checking the status of the IP resource to make sure
it is configured and up? We do have the monitor link option enabled. This is
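In the meantime, a crude external check along the lines of what an address-status probe would do (a sketch; the function name is invented, and interface/address are passed as arguments):

```shell
#!/bin/sh
# Return success if the given IPv4 address is configured on the interface
ip_present() {
    ip -o -4 addr show dev "$1" 2>/dev/null | grep -qw "$2"
}

# Example against loopback, which normally carries 127.0.0.1
if ip_present lo 127.0.0.1; then
    echo "address present"
else
    echo "address missing"
fi
```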
I'm having the exact same issue on a RHEL 5.2 system, and have an open
support case with Red Hat. When it is resolved I can post the details.
Any word on this? I think I may get my own case going. Do you know if a
bugzilla got assigned to this?
Thanks!
Jeremy
--
Hi,
I'm running Cluster 2 on RHEL 5.2 (I saw this behavior on 5.1 and updated
just yesterday to see if it fixed it, but no luck) and I'm seeing issues
when I reboot a node. I tried increasing the post_join_delay to 60 and the
totem token to 25000, but nothing seems to be working.
During the
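For reference, the two tunables mentioned live in cluster.conf; a fragment with the values tried above (the poster's values, not a recommendation) would look roughly like:

```xml
<!-- cluster.conf fragment; values are the ones tried above, not tuning advice -->
<fence_daemon post_join_delay="60"/>
<totem token="25000"/>
```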
Hi,
I'm currently using cluster on RHEL 4.6 and will soon be moving to
cluster on RHEL 5.1. We are using some script resources and I'm trying to
find out if there are timeouts on the start, stop and status functions. If so,
what are the defaults and can they be tuned?
TIA
Jeremy
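As far as I know, the timeouts come from each resource agent's metadata, and on RHEL 5 rgmanager only enforces them when asked to; the fragment below shows the attribute I believe controls that (an assumption worth verifying against your release; service and script names are invented):

```xml
<!-- Invented names; __enforce_timeouts is an assumption to verify -->
<service name="app1">
  <script name="app1-init" file="/etc/init.d/app1" __enforce_timeouts="1"/>
</service>
```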
--
Yes, unless you specify __independent_subtree for a resource; in that case
it and its children are independent and are restarted as a separate
operation rather than through a full service restart. Only if that partial
restart fails does a full service restart occur.
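A hypothetical fragment illustrating that behavior: if the script below fails its status check, only the script subtree is restarted, and the IP above it is left alone unless the subtree restart itself fails (names and addresses invented):

```xml
<!-- Invented names/addresses, for illustrating __independent_subtree -->
<service name="web">
  <ip address="192.168.1.10" monitor_link="1">
    <script name="app" file="/etc/init.d/app" __independent_subtree="1"/>
  </ip>
</service>
```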
I'm not seeing this