I was testing fence_ipmilan on a RHEL 7 cluster and noticed that, when running the
fence agent with the option to power off the remote node, it appears to
cleanly shut down the remote node instead of removing power immediately. I
suspect something like ACPI is intercepting the power off and trying to stop
RHEL 7
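For reference, a power-off call with fence_ipmilan looks roughly like the
following (the address, credentials, and lanplus flag here are illustrative,
not the ones from my setup):

    # power the remote node off via its BMC; -P enables IPMI lanplus
    fence_ipmilan -P -a 192.168.1.10 -l admin -p secret -o off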
Just wanted to follow up. I was able to test further. Using
"acpi=off" on the kernel line appears to have eliminated the timing issue I
was seeing on my hardware setup.
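By "on the kernel line" I mean appending the option to the boot command line;
on a grub2-based RHEL 7 box that is roughly the following (the existing
options shown are only an example of what your system may already have):

    # /etc/default/grub -- keep the existing options and append acpi=off
    GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet acpi=off"
    # regenerate the grub config afterwards
    grub2-mkconfig -o /boot/grub2/grub.cfg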
Thanks
Robert
On Fri, Mar 18, 2016 at 8:24 AM, Robert Hayden wrote:
> I was testing fence_ipmilan on R
On Thu, Jan 24, 2013 at 11:28 AM, Robert Hayden wrote:
> On Tue, Jan 22, 2013 at 12:38 PM, Fabio M. Di Nitto wrote:
>>
>> On 01/22/2013 06:22 PM, Robert Hayden wrote:
>> > I am testing RHCS 6.3 and found that the self_fence option for a file
>> > system
On Tue, Jan 22, 2013 at 12:38 PM, Fabio M. Di Nitto wrote:
>
> On 01/22/2013 06:22 PM, Robert Hayden wrote:
> > I am testing RHCS 6.3 and found that the self_fence option for a file
> > system resource will no longer function as expected. Before I log an
> > SR with RH
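For reference, the sort of fs resource definition being discussed looks
roughly like this in cluster.conf (the name, device, and mountpoint below are
illustrative):

    <fs name="appfs" device="/dev/vgapp/lvapp" mountpoint="/data"
        fstype="ext3" self_fence="1" force_unmount="1"/>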
I believe the document you are following is for RHEL 4.
The packages I typically pull are as follows. They will pull in others as
needed.
# RHCS Specific Packages
#cman.x86_64
#openais.x86_64
#lvm2-cluster.x86_64
#gfs2-utils.x86_64
#rgmanager.x86_64
#system-config
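A rough way to pull those in with yum (dependencies come along automatically;
adjust the list for your release):

    yum install cman openais lvm2-cluster gfs2-utils rgmanager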
Looking to see if anyone has worked with creating a multi-node RHCS cluster
comprised of virtual machines in a large KVM pool. I am investigating the
potential of creating a KVM pool that will consist of 10s of physical
machines that will provide 100s of VMs. Some of the VMs need to have HA
confi
> On 09/27/2011 05:33 PM, Ruben Sajnovetzky wrote:
> >
> > I might be doing something wrong, because you say "you are fine" but it
> > didn't work :(
> >
> > All servers have "/opt/app" mounted on the same internal disk partition.
> > They are not shared, it is just that all have identical layout.
> > I
You might also try adding the multicast stanza inside the stanza;
you can specify a particular interface there as well.
For example,
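something along these lines in cluster.conf (the address and interface are
illustrative, and the exact schema varies a bit by release):

    <cman>
      <multicast addr="239.192.0.100"/>
    </cman>
    <clusternodes>
      <clusternode name="node1" nodeid="1">
        <multicast addr="239.192.0.100" interface="eth1"/>
      </clusternode>
    </clusternodes>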
I searched the openais forums and ran across two recent threads and a couple
of potential patches that sound interesting. Unfortunately, I do not have
enough experience to determine whether they are related to my issue.
"[Openais] Problems forming cluster on corosync startup" at
http://marc.info/?l=opena
On Fri, Sep 2, 2011 at 8:38 AM, Robert Hayden wrote:
> Has anyone experienced the following error/hang/loop when attempting
> to stop rgmanager or cman on the last node of a two node cluster?
>
> groupd[4909]: cpg_leave error retrying
>
> Basic scenario:
> RHEL 5.7
Has anyone experienced the following error/hang/loop when attempting
to stop rgmanager or cman on the last node of a two node cluster?
groupd[4909]: cpg_leave error retrying
Basic scenario:
RHEL 5.7 with the latest errata for cman.
Create a two node cluster with qdisk and higher totem token=7
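For context, the totem setting in question lives in cluster.conf roughly like
this (the token value shown is only illustrative, not the one from my test):

    <totem token="30000"/>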
I was attempting to add VM resources to a two node cluster with the
ccs tool (RHEL 6.1). I believe I am either not using the
proper ccs command or there is a bug in the ccs tool for VMs. Wanted
to see if anyone has attempted this before I go to bugzilla.
Command:
ccs -f cluster.b
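For reference, the end result I am after is a vm resource stanza in
cluster.conf roughly like this (name and path are illustrative):

    <rm>
      <vm name="guest1" path="/etc/libvirt/qemu" autostart="1" recovery="restart"/>
    </rm>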
On Sat, May 14, 2011 at 2:21 PM, Sufyan Khan wrote:
>
> Yes, you can see in the attached script
I can very well be misreading the script, but in the status
function you are returning a "0" or a "1" appropriately; I am not
sure that return value ends up being the return value for script_db.sh itself.
Isn'
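As a sketch of what I mean (hypothetical names throughout), the exit status of
the script itself is what rgmanager actually checks, so the value computed in
the status function has to be propagated out:

    #!/bin/bash
    # hypothetical skeleton of a script resource like script_db.sh
    status() {
        # return 0 if the db process is up, 1 otherwise (check is illustrative)
        pgrep -f mydb >/dev/null && return 0
        return 1
    }

    case "$1" in
        start)  echo "starting db" ;;
        stop)   echo "stopping db" ;;
        status) status; exit $? ;;   # propagate the function's return value
        *)      exit 1 ;;
    esac
    exit 0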
I believe you will want to investigate the "clean_start" property in the
fence_daemon stanza (RHEL 5). Unsure if it is in RHEL6/Cluster3 code. It
is my understanding that the property can be used to bypass the timeout and
remote fencing on initial startup. This assumes you know that the remote
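As a sketch, in cluster.conf that looks something like this (the
post_join_delay value is illustrative):

    <fence_daemon clean_start="1" post_join_delay="20"/>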
I have searched for a concrete example of RHCS in a pure IPv6 environment,
but I have only found references that IPv6 is supported.
Does anyone have experience with setting up RHCS with IPv6 that they would
be willing to share? Any good technical papers out there? In particular,
I would like to
Looking for guidelines on when RHCS components can be upgraded in a rolling
fashion and when it is best to simply take a full cluster downtime. I am
looking at 2-6 node clusters with each node providing a unique set of
functions along with common functions. Each node has a dedicated failover
node