erd: startup succeeded => CMAN up and running
I looked for the FAQ you mentioned but found nothing; could you post it when you
have time? ;)
Jean-Daniel BONNETOT
-Original Message-
From: linux-cluster-boun...@redhat.com [mailto:linux-cluster-boun...@redhat.com]
On behalf of Alvaro
Jean,
I too suffered the same issue, opened a case with support, etc. The best options
when running ntpd and RHCS together are:
-First, always start cman, rgmanager, etc. (I mean, all the RHCS daemons)
after ntpd has started. In RHEL5, at least, the default is the other way around.
You can do that if you disa
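For reference, on a stock RHEL5 install the relevant SysV init-script headers look
roughly like this (the priority numbers are from memory, so verify against your own
/etc/init.d files); a lower start number runs first, and cman's (21) is below
ntpd's (58), so cman comes up before the clock is disciplined:

```
/etc/init.d/ntpd:  # chkconfig: - 58 74
/etc/init.d/cman:  # chkconfig: - 21 79
```

Raising cman's start number above ntpd's in that header and then re-registering the
service with chkconfig is one way to flip the order.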
ive/active and multipath
seems to treat them as active/passive, but I guess this is for another mailing
list.
Raúl
From: linux-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com] On Behalf Of Alvaro Jose Fernandez
Sent: Wednesday, June 15, 2011 1:15 PM
To: linux clusteri
Hi,
DOC-35489 only partially addresses the problem. I have it too, on a
passive/active IBM DS4000 array with RHEL5.5. I've excluded any SAN partitions
from lvm.conf as per the note (and also built a new initrd, since lvm.conf is
included at boot time because my / partition is on LVM),
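The exclusion can be done with an lvm.conf filter along these lines (the device
paths here are examples for illustration, not the ones from my setup):

```
# /etc/lvm/lvm.conf -- reject the SAN/multipath devices, accept everything else
filter = [ "r|/dev/mapper/mpath.*|", "r|/dev/sd[b-z].*|", "a|.*|" ]
```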
Hi,
Does using a post_fail_delay > 0, when triggered, block the resources running
on the node if one is not using GFS? For example, if one only uses a
couple of fs resources mounted locally in an HA configuration, with no shared
filesystems at all.
Regards,
alvaro
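For context, post_fail_delay is set on the fence_daemon line in cluster.conf,
e.g. (the 30-second value is just an example, not a recommendation):

```xml
<fence_daemon post_fail_delay="30" post_join_delay="3"/>
```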
Hadn't to...for example, look at the scenario where there is a separation
between the heartbeat/fencing network and the application/public
network.
In this case it seems logical to me that the clusternode name should point
to the private names (i.e., call them "nodexxx-hb" or "nodexxx-private", etc.
Un
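In cluster.conf that naming scheme would look something like this (the hostnames
are made up for illustration; they must resolve to the private heartbeat
interfaces):

```xml
<clusternodes>
  <clusternode name="node01-hb" nodeid="1">
    <fence/>
  </clusternode>
  <clusternode name="node02-hb" nodeid="2">
    <fence/>
  </clusternode>
</clusternodes>
```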
Hi,
No, it's not mandatory to use RAC. You can use Cluster Suite HA services
to provide a cold-failover cluster for an Oracle database, and this is
supported by both Oracle and Red Hat as a valid deployment. You can
install the Oracle Standard or Enterprise edition for that.
Check Lon's note
Gianluca,
I thought that the sequence when both nodes are down and one starts was:
a) The fence daemon notices that the other node is down
(via the status option of the fence command)
b) The fence daemon waits for the configured amount of time, based on
the cluster.conf values or the defaults, to "see" the other
Hi,
I have the same situation (two_node=1, RHEL5.5, no quorum disk), but it works
fine for me. With both nodes down, starting one node always successfully fences
the other, and this is expected, as Fabio said.
In my scenario the fenced node must remain down, even when successfully fenced
by
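For reference, the two_node setup mentioned above is declared in cluster.conf as:

```xml
<cman two_node="1" expected_votes="1"/>
```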
Hi,
Given that fencing is properly configured, I think the default boot/shutdown
RHCS scripts should work. I too use two_node (but no clvmd) on RHEL5.5
with the latest updates to cman and rgmanager, and a shutdown -r works well
(and a shutdown -h too). The other node's cluster daemon should log this
as a
Hi
There was recently a thread about evictions due to multipath timeouts
with qdisk; it's in the "Re: [Linux-cluster] RHCS Multipath / Fence"
thread. The RHN docs referenced there suggest timings for
multipath.conf to try to avoid these issues; perhaps they would help.
regards,
alvaro
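The kind of multipath.conf timing knobs involved look like this (the values are
illustrative only; they need to be tuned against your qdisk interval/tko):

```
defaults {
    polling_interval  5
    no_path_retry     12    # ~60s of path retries before failing I/O
}
```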
-Mens
Many thanks for the advice, Digimer.
regards.
> I know it would be desirable to have two devices for a fully redundant
> configuration, but after reading some examples from the docs (they are
> meant for two power switches), I still cannot understand why a single
> power switch connected to both s
Thanks for the tip, Jakov.
regards,
alvaro
> Any experiences with this issue?
It's because you still have a SPOF. In this case the SPOF is the electronic
module of the power switch, so if the electronics go down there's no
way to fence the node. It would be better to have, for example, iDRAC or
IPMI
Hi,
I would like to know about wheter it would suffice for a two-node RHCS cluster
a single power switch (APC 7921) fencing device. The power switch has 8 power
outlets and I intend to use four of them for each node's dual power supplies.
I know it would be desirable to have two devices
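A single-switch setup would be declared in cluster.conf roughly as follows (the
IP address, credentials, and device name are placeholders, not real values):

```xml
<fencedevices>
  <fencedevice agent="fence_apc" name="apc1" ipaddr="10.0.0.10"
               login="apc" passwd="secret"/>
</fencedevices>
```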