On 02/10/2011 08:52 AM, Andrew Beekhof wrote:
2011/1/31 José Luis Rodríguez García jose.l.rodrig...@tecnocom.es:
Is Pacemaker compatible with Solaris 10 x64 using IPv6?
Never tried it, but if you can get corosync or heartbeat running
there, pacemaker should work just fine.
Last I checked
On: Thu, 10 Feb 2011 08:51:01 +0100, Andrew Beekhof wrote:
On Wed, Feb 9, 2011 at 2:48 PM, Stephan-Frank Henry frank.he...@gmx.net
wrote:
Hello again,
after fixing up my VirtualIP problem, I have been doing some Split Brain
tests and while everything 'returns to normal', it is not quite
Original Message
On: Thu, 10 Feb 2011 08:55:56 +0100, Florian Haas wrote:
On 02/09/2011 02:48 PM, Stephan-Frank Henry wrote:
Hello again,
after fixing up my VirtualIP problem, I have been doing some Split Brain
tests and while everything 'returns to normal', it is
On Thu, Feb 10, 2011 at 9:09 AM, Stephan-Frank Henry
frank.he...@gmx.net wrote:
On: Thu, 10 Feb 2011 08:51:01 +0100, Andrew Beekhof wrote:
On Wed, Feb 9, 2011 at 2:48 PM, Stephan-Frank Henry frank.he...@gmx.net
wrote:
Hello again,
after fixing up my VirtualIP problem, I have been doing
On: Thu, 10 Feb 2011 09:25:22 +0100, Andrew Beekhof wrote:
On Thu, Feb 10, 2011 at 9:09 AM, Stephan-Frank Henry
frank.he...@gmx.net wrote:
You forgot
0) Configure stonith
If data is being written to both sides, one of the sets is always
going to be lost.
Agreed and acceptable,
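For anyone following along: a minimal two-node stonith setup in crm shell
syntax could look like the sketch below. Hostnames, addresses and credentials
are invented, and external/ipmi is just one example plugin; substitute
whatever device your hardware actually offers.

  primitive st-node1 stonith:external/ipmi \
          params hostname="node1" ipaddr="192.168.1.101" \
                 userid="admin" passwd="secret" interface="lan"
  primitive st-node2 stonith:external/ipmi \
          params hostname="node2" ipaddr="192.168.1.102" \
                 userid="admin" passwd="secret" interface="lan"
  # a node must never be allowed to shoot itself
  location l-st-node1 st-node1 -inf: node1
  location l-st-node2 st-node2 -inf: node2
  property stonith-enabled="true"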
On Tue, Feb 8, 2011 at 3:42 AM, Bob Schatz bsch...@yahoo.com wrote:
I am running Pacemaker 1.0.9.1 and Heartbeat 3.0.3.
I have a master/slave resource with an agent.
When the resource hangs while doing a promote, the resource returns
OCF_ERR_GENERIC.
However, all this does is call demote
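Not an answer to the hang itself, but if the goal is to have the resource fail
over instead of being demoted and re-promoted on the same node, a low
migration-threshold can help: a single promote failure then bars the node until
the failcount is cleared or failure-timeout expires. A sketch in crm shell
syntax, with the agent name, timeouts and thresholds made up for illustration:

  primitive myStateful ocf:custom:myagent \
          op monitor interval="10s" role="Master" timeout="30s" \
          op monitor interval="20s" role="Slave" timeout="30s" \
          op promote timeout="60s" on-fail="restart" \
          meta migration-threshold="1" failure-timeout="3600s"
  ms msStateful myStateful \
          meta master-max="1" master-node-max="1" \
               clone-max="2" clone-node-max="1" notify="true"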
Hi Horms,
On Thu, Feb 10, 2011 at 08:51:39AM +0900, Simon Horman wrote:
Hi Pacemaker upstream people,
could someone comment on this bug report.
The bug report can be seen at http://bugs.debian.org/612682
CCing 612...@bugs.debian.org should append any responses
to the bug report.
Hi,
does anybody have an idea and an example configuration for ordering the fence
resource.
I have the situation and problem that meatware fencing is used before ILO fencing.
So fencing the node through the ILO only happens about 60 seconds after the
meatware fencing has timed out.
Meatware should
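One way to express "ILO first, meatware only as a fallback" is a fencing
topology, assuming your Pacemaker release is recent enough (1.1+) and your crm
shell supports the fencing_topology command; resource names and parameters
below are made up, so check your plugin's parameter list first:

  primitive st-ilo stonith:external/riloe \
          params hostlist="node1" ilo_hostname="node1-ilo" \
                 ilo_user="Administrator" ilo_password="secret"
  primitive st-meat stonith:meatware \
          params hostlist="node1 node2"
  # level 1: try the ILO first; level 2: fall back to meatware
  fencing_topology st-ilo st-meat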
Hi,
On Thu, Feb 10, 2011 at 02:10:00PM +0100, Johannes Freygner wrote:
Hi,
does anybody have an idea and an example configuration for ordering the fence
resource.
I have the situation and problem that meatware fencing is used before ILO
fencing. So fencing the node through the ILO
Thanks Andrew.
Yes, cibadmin -Ql works, but cibadmin -Q does not.
What is the DC?
And here are the logs.
Feb 10 08:57:30 arsvr1 cibadmin: [4264]: info: Invoked: cibadmin -Ql
Feb 10 08:57:32 arsvr1 cibadmin: [4265]: info: Invoked: cibadmin -Q
Feb 10 08:58:04 arsvr1 crmd: [960]: info:
That's it. It works fine,
Thank you
Hannes
-----Original Message-----
From: Dejan Muhamedagic [mailto:deja...@fastmail.fm]
Sent: Donnerstag, 10. Februar 2011 14:17
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] Define fence resource order
Hi,
On Thu, Feb 10, 2011 at
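A guess at what is going on here: the DC is the Designated Co-ordinator, the
node elected to drive cluster decisions, and it also hosts the master CIB
instance. If crmd is not running properly, no DC gets elected, which would
explain the difference between the two invocations:

  # Query the local node's copy of the CIB (-l/--local);
  # this works even while no DC has been elected yet:
  cibadmin -Ql

  # Query via the master CIB instance, which lives on the DC;
  # this can hang or fail as long as the cluster has no DC:
  cibadmin -Q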
Hello again,
one quick question.
In many examples I see rsc_colocation variants with keys 'rsc', 'with-rsc' and
'with-rsc-role'.
Yet when I use them I get smacked by crm_verify:
cib.xml:102: element rsc_colocation: validity error : Element rsc_colocation
does not carry attribute to
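That validity error usually means the CIB is still validated against a pre-1.0
schema, in which rsc_colocation carried from/to attributes; rsc, with-rsc and
with-rsc-role belong to the pacemaker-1.0 schema. A sketch of the newer form,
with made-up ids and resource names:

  <rsc_colocation id="ip-with-master" score="INFINITY"
                  rsc="VirtualIP"
                  with-rsc="msStateful" with-rsc-role="Master"/>

Checking the validate-with attribute on the cib element, and running
cibadmin --upgrade (with --force if necessary) to move to the current schema,
might be worth a try.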
Hi,
Now I took one node offline with /etc/init.d/heartbeat stop.
With one node arsvr1 online, Heartbeat tries to respawn crmd, but it ends with
error code 2.
Here are the logs:
Feb 10 16:37:10 arsvr1 crmd: [5251]: info: do_state_transition: State
transition S_STARTING - S_PENDING [ input=I_PENDING
The logs did not come through in the right format; let me try one more time.
Now I took one node offline with /etc/init.d/heartbeat stop.
With one node arsvr1 online, Heartbeat tries to respawn crmd, but it ends with
error code 2.
Here are the logs:
Feb 10 16:37:10 arsvr1 crmd: [5251]: info: do_state_transition:
On Wed, Feb 09, 2011 at 02:48:52PM +0100, Stephan-Frank Henry wrote:
Hello again,
after fixing up my VirtualIP problem, I have been doing some Split
Brain tests and while everything 'returns to normal', it is not quite
what I had desired.
My scenario:
Active/Passive 2-node cluster
Afternoon all,
We're cutting over from openSUSE and plain Heartbeat on ext3 (two-node
active/passive) to SLES with Pacemaker/Corosync and OCFS2 in a split-role
active/passive configuration (three databases, two on one server and one on
the other, which can fail over to each other).
As
I won't talk about the other parts, but the approach for the pgsql
configuration is incorrect. You shouldn't create a new RA for each of your
instances, as it seems you are trying to do:
primitive PGSQL_DEPOT ocf:heartbeat:pgsql.depot \
instead you should use a different set of instance attributes (parameters)
for each of
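In other words, keep ocf:heartbeat:pgsql as it is shipped and give each
database instance its own parameters. A sketch with made-up paths, ports and
the second resource name invented for illustration:

  primitive PGSQL_DEPOT ocf:heartbeat:pgsql \
          params pgdata="/var/lib/pgsql/depot/data" pgport="5433" \
          op monitor interval="30s" timeout="30s"
  primitive PGSQL_MAIN ocf:heartbeat:pgsql \
          params pgdata="/var/lib/pgsql/main/data" pgport="5432" \
          op monitor interval="30s" timeout="30s"

Both primitives reference the same RA (no pgsql.depot copy is needed); the
params are what distinguish the instances.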