> On 19 Apr 2015, at 11:37 pm, Andrei Borzenkov wrote:
>
> On Sun, 19 Apr 2015 14:23:27 +0200, Andreas Kurz wrote:
>
>> On 2015-04-17 12:36, Thomas Manninger wrote:
>>> Hi list,
>>>
>>> i have a pacemaker/corosync2 setup with 4 nodes, stonith configured over
>>> ipmi interface.
>>>
>>> My pr
On Sun, 19 Apr 2015 14:23:27 +0200, Andreas Kurz wrote:
> On 2015-04-17 12:36, Thomas Manninger wrote:
> > Hi list,
> >
> > i have a pacemaker/corosync2 setup with 4 nodes, stonith configured over
> > ipmi interface.
> >
> > My problem is, that sometimes, a wrong node is stonithed.
> > As examp
On 2015-04-17 12:36, Thomas Manninger wrote:
> Hi list,
>
> i have a pacemaker/corosync2 setup with 4 nodes, stonith configured over
> ipmi interface.
>
> My problem is, that sometimes, a wrong node is stonithed.
> As example:
> I have 4 servers: node1, node2, node3, node4
>
> I start a hardw
For information: I am using pacemaker 1.1.12 on Debian wheezy.
-Original Message-
Sent: Friday, 17 April 2015 at 12:36:00
From: "Thomas Manninger"
To: pacemaker@oss.clusterlabs.org
Subject: [Pacemaker] stonith
Hi list,
i have a pacemaker/corosync2 setup wi
Hi list,
I have a pacemaker/corosync2 setup with 4 nodes, with stonith configured over the IPMI interface.
My problem is that sometimes the wrong node is stonithed.
For example:
I have 4 servers: node1, node2, node3, node4.
I start a hardware reset on node1, but node1 and node3 will be stonithed.
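A frequent cause of the wrong node being fenced is a fence device that is not
restricted to its target. A minimal sketch with one IPMI device per node, in crm
shell syntax (IPMI addresses, credentials and resource names are placeholders,
not taken from this setup):
# one fence device per node, limited to that node via pcmk_host_list
primitive fence-node1 stonith:fence_ipmilan \
    params pcmk_host_list="node1" ipaddr="10.0.0.1" login="admin" passwd="secret" lanplus="1" \
    op monitor interval="60s"
primitive fence-node2 stonith:fence_ipmilan \
    params pcmk_host_list="node2" ipaddr="10.0.0.2" login="admin" passwd="secret" lanplus="1" \
    op monitor interval="60s"
# keep each device off the node it is meant to kill
location l-fence-node1 fence-node1 -inf: node1
location l-fence-node2 fence-node2 -inf: node2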
> On 5 Nov 2014, at 9:39 am, Alex Samad - Yieldbroker
> wrote:
>
>
>
>
>> I read to mean that demorp2 killed this node >>> Nov 4 23:21:37
>> demorp1 corosync[23415]: cman killed by node 2 because we were killed by
>> cman_tool or other application
>
> Nov 4 23:21:37 demorp1
> -Original Message-
> From: Digimer [mailto:li...@alteeve.ca]
> Sent: Wednesday, 5 November 2014 8:54 AM
> To: Alex Samad - Yieldbroker; Andrei Borzenkov
> Cc: The Pacemaker cluster resource manager
> Subject: Re: [Pacemaker] stonith q
>
> On 04/11/14 02:45 PM,
On 04/11/14 02:45 PM, Alex Samad - Yieldbroker wrote:
{snip}
Any pointers to a framework somewhere?
I do not think there is any formal stonith agent developers' guide;
take any existing agent like external/ipmi and modify it to suit your
needs.
Does fenced have any handlers, I notice it
{snip}
> >> Any pointers to a framework somewhere?
> >
> > I do not think there is any formal stonith agent developers' guide;
> > take any existing agent like external/ipmi and modify it to suit your
> needs.
> >
> >> Does fenced have any handlers, I notice it logs a message in syslog and
> clus
On 04/11/14 03:55 AM, Andrei Borzenkov wrote:
On Mon, 3 Nov 2014 07:07:41 +, Alex Samad - Yieldbroker wrote:
{snip}
What I am hearing is that it's not available. Is it possible to hook to
a custom script on that event? I can write my own restart.
Sure you can write your own external stonith script.
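As a starting point for such an agent, here is a rough skeleton of the external/
plugin interface as used by agents like external/ipmi (the restart helper path is
a placeholder; adapt it to your environment):
#!/bin/sh
# Skeleton external stonith agent: stonithd calls it with one sub-command
# argument; configured parameters (e.g. hostlist) arrive as environment variables.
case "$1" in
gethosts)
    echo $hostlist
    exit 0;;
on|off|reset)
    # $2 is the node to act on; call your own power/restart mechanism here
    /usr/local/sbin/my-node-power "$1" "$2"    # placeholder helper
    exit $?;;
status)
    exit 0;;
getconfignames)
    echo "hostlist"
    exit 0;;
getinfo-devid|getinfo-devname)
    echo "my-external-agent"
    exit 0;;
getinfo-devdescr|getinfo-devurl)
    echo "Custom external stonith agent"
    exit 0;;
getinfo-xml)
    echo '<parameters><parameter name="hostlist" required="1"><content type="string"/></parameter></parameters>'
    exit 0;;
*)
    exit 1;;
esac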
On Mon, 3 Nov 2014 07:07:41 +, Alex Samad - Yieldbroker wrote:
> {snip}
> > > What I am hearing is that its not available. Is it possible to hook to
> > > a custom script on that event, I can write my own restart
> > >
> >
> > Sure you can write your own external stonith script.
>
>
> Any po
{snip}
> > What I am hearing is that its not available. Is it possible to hook to
> > a custom script on that event, I can write my own restart
> >
>
> Sure you can write your own external stonith script.
Any pointers to a framework somewhere?
Does fenced have any handlers, I notice it logs a
> > >>> -Original Message-
> > >>> From: Digimer [mailto:li...@alteeve.ca]
> > >>> Sent: Sunday, 2 November 2014 9:49 AM
> > >>> To: The Pacemaker cluster resource manager
> > >>> Subject: Re: [Pacemaker] stonith q
> > >>>
> > >
> -Original Message-
> From: Digimer [mailto:li...@alteeve.ca]
> Sent: Monday, 3 November 2014 3:26 AM
> To: The Pacemaker cluster resource manager; Alex Samad - Yieldbroker
> Subject: Re: [Pacemaker] stonith q
>
> On 02/11/14 06:45 AM, Andrei Borzenkov wrote:
>
On 02/11/14 06:45 AM, Andrei Borzenkov wrote:
On Sun, 2 Nov 2014 10:01:59 +, Alex Samad - Yieldbroker wrote:
-Original Message-
From: Digimer [mailto:li...@alteeve.ca]
Sent: Sunday, 2 November 2014 9:49 AM
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker
On Sun, 2 Nov 2014 10:01:59 +, Alex Samad - Yieldbroker wrote:
>
>
> > -Original Message-
> > From: Digimer [mailto:li...@alteeve.ca]
> > Sent: Sunday, 2 November 2014 9:49 AM
> > To: The Pacemaker cluster resource manager
> > Subject: Re: [Pacem
> -Original Message-
> From: Digimer [mailto:li...@alteeve.ca]
> Sent: Sunday, 2 November 2014 9:49 AM
> To: The Pacemaker cluster resource manager
> Subject: Re: [Pacemaker] stonith q
>
> On 01/11/14 06:27 PM, Alex Samad - Yieldbroker wrote:
> > Hi
> >
On 01/11/14 06:27 PM, Alex Samad - Yieldbroker wrote:
Hi
2 node cluster, running under vmware
Centos 6.5
pacemaker-libs-1.1.10-14.el6_5.3.x86_64
pacemaker-cluster-libs-1.1.10-14.el6_5.3.x86_64
pacemaker-cli-1.1.10-14.el6_5.3.x86_64
pacemaker-1.1.10-14.el6_5.3.x86_64
this is what I have in /e
Hi
2-node cluster, running under VMware
CentOS 6.5
pacemaker-libs-1.1.10-14.el6_5.3.x86_64
pacemaker-cluster-libs-1.1.10-14.el6_5.3.x86_64
pacemaker-cli-1.1.10-14.el6_5.3.x86_64
pacemaker-1.1.10-14.el6_5.3.x86_64
this is what I have in /etc/cluster/cluster.conf
And pcs config
sto
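For a cman+pacemaker stack like this one, cman's fencing is normally redirected to
pacemaker with fence_pcmk in /etc/cluster/cluster.conf; a minimal fragment (node
names are placeholders, one <clusternode> block per node inside <clusternodes>):
<clusternode name="node1" nodeid="1">
  <fence>
    <method name="pcmk-redirect">
      <device name="pcmk" port="node1"/>
    </method>
  </fence>
</clusternode>
<fencedevices>
  <fencedevice name="pcmk" agent="fence_pcmk"/>
</fencedevices>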
> From: Andrew Beekhof [mailto:and...@beekhof.net]
> Sent: Friday, 11 July 2014 01:42
> To: The Pacemaker cluster resource manager
> Subject: Re: [Pacemaker] pacemaker stonith No such device
>
>
> On 9 Jul 2014, at 8:53 pm, Dvorak Andreas
> wrote:
>
>> Dear all,
>>
>> unfortunately my
constraint location ipmi-fencing-sv2828 prefers sv2827=-INFINITY
> pcs property set stonith-enabled=true
> pcs property set no-quorum-policy=ignore
>
> Best regards
> Andreas
>
> -Original Message-
> From: Andrew Beekhof [mailto:and...@beekhof.net]
> Sent:
pcs property set stonith-enabled=true
pcs property set no-quorum-policy=ignore
Best regards
Andreas
-Original Message-
From: Andrew Beekhof [mailto:and...@beekhof.net]
Sent: Friday, 11 July 2014 01:42
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] pacemake
On 9 Jul 2014, at 8:53 pm, Dvorak Andreas wrote:
> Dear all,
>
> unfortunately my stonith does not work on my pacemaker cluster. If I do
> ifdown on the two cluster interconnect interfaces of server sv2827 the server
> sv2828 want to fence the server sv2827, but the messages log says:err
Dear all,
Unfortunately my stonith does not work on my pacemaker cluster. If I do ifdown
on the two cluster interconnect interfaces of server sv2827, server sv2828
wants to fence server sv2827, but the messages log says: error:
remote_op_done: Operation reboot of sv2827-p1 by sv2828-p1
On 17/03/14 11:52 PM, khaled atteya wrote:
Hi,
For a two-node cluster, whether Active/Active or Active/Passive,
if a split brain happens, which node will STONITH the other?
"The fast node" is the short answer.
The long answer is that you can give one node a priority over the other
by s
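One common way to set that priority is a static delay on the fence device whose
target is the node you want to survive, so its killer pauses while the preferred
node fences the other side first. A sketch with pcs (device name and value are
placeholders):
# fence-node1 is the device that fences node1; delaying it lets node1 win the race
pcs stonith update fence-node1 delay=15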
Hi,
For a two-node cluster, whether Active/Active or Active/Passive, if a
split brain happens, which node will STONITH the other?
--
KHALED MOHAMMED ATTEYA
System Engineer
On 22 Jan 2014, at 12:18 am, Robert Lindgren wrote:
> Hi,
>
> I'm trying to get rid of some stonith info logging but I fail :(
Turn off debug and, for everything else, edit the C source code
>
> The log-lines are like this in syslog:
> Jan 21 13:24:15 wolf1 stonith-ng: [6349]: info: stonith_
Can you trace the resource?
crm resource trace ...
Maybe, if you can do that, you will get more info.
2014/1/21 Robert Lindgren
> Hi,
>
> I'm trying to get rid of some stonith info logging but I fail :(
>
> The log-lines are like this in syslog:
> Jan 21 13:24:15 wolf1 stonith-ng: [6349]: info: stoni
Hi,
I'm trying to get rid of some stonith info logging but I fail :(
The log-lines are like this in syslog:
Jan 21 13:24:15 wolf1 stonith-ng: [6349]: info: stonith_command: Processed
st_execute from lrmd: rc=-1
Jan 21 13:24:15 wolf1 external/ipmi[11606]: [11616]: debug: ipmitool
output: Chassis P
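The debug-level lines (such as the external/ipmi one above) are usually controlled
by the logging section of /etc/corosync/corosync.conf; a sketch with the options
from a stock corosync 1.x configuration:
logging {
        fileline: off
        to_stderr: no
        to_syslog: yes
        syslog_facility: daemon
        debug: off
        timestamp: on
}
The info-level stonith-ng messages are not affected by this, which is why the
answer above points at the source code for everything else.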
On Wednesday, 20 November 2013 at 11:33:55, Lars Marowsky-Bree wrote:
> On 2013-11-20T11:20:45, Michael Schwartzkopff wrote:
> > I removed the pacemaker installation 1.1.9 from the opensuse build
> > server and installed the 1.1.10 from the RHEL-HA repository. now
> > everything is working as expec
On 2013-11-20T11:20:45, Michael Schwartzkopff wrote:
> I removed the pacemaker installation 1.1.9 from the openSUSE build
> server and installed the 1.1.10 from the RHEL-HA repository. Now
> everything is working as expected.
>
> Besides some kernel panics, that are not related to the cluster
>
On Wednesday, 20 November 2013 at 10:28:18, Andrew Beekhof wrote:
> On 19 Nov 2013, at 4:19 pm, Michael Schwartzkopff wrote:
> >> Andrew Beekhof wrote:
> >> On 19 Nov 2013, at 1:23 am, Michael Schwartzkopff wrote:
> >>> Hi,
> >>>
> >>> I installed pacemaker on a RHEL 6.4 machine. Now crm tells
On 19 Nov 2013, at 4:19 pm, Michael Schwartzkopff wrote:
>
>
>
>
> Andrew Beekhof wrote:
>>
>> On 19 Nov 2013, at 1:23 am, Michael Schwartzkopff wrote:
>>
>>> Hi,
>>>
>>> I installed pacemaker on a RHEL 6.4 machine. Now crm tells me that
>> there is no
>>> stonith ra class, only lsb,
Andrew Beekhof wrote:
>
>On 19 Nov 2013, at 1:23 am, Michael Schwartzkopff wrote:
>
>> Hi,
>>
>> I installed pacemaker on a RHEL 6.4 machine. Now crm tells me that
>there is no
>> stonith ra class, only lsb, ocf and service.
>>
>> What did I miss? thanks for any valuable comments.
>
>did
On 19 Nov 2013, at 1:23 am, Michael Schwartzkopff wrote:
> Hi,
>
> I installed pacemaker on a RHEL 6.4 machine. Now crm tells me that there is
> no
> stonith ra class, only lsb, ocf and service.
>
> What did I miss? thanks for any valuable comments.
did you install the fencing-agents packag
Hi,
I installed pacemaker on a RHEL 6.4 machine. Now crm tells me that there is no
stonith ra class, only lsb, ocf and service.
What did I miss? thanks for any valuable comments.
--
Kind regards,
Michael Schwartzkopff
--
[*] sys4 AG
http://sys4.de, +49 (89) 30 90 46 64, +49 (16
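The usual fix on RHEL 6 is to install the fence agents and re-check the available
resource classes; a sketch (package name as shipped in RHEL 6, following the
suggestion above):
# install the fence agents, then list the resource agent classes again
yum install fence-agents
crm ra classes    # 'stonith' should now appear alongside lsb, ocf and service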
On 9 Nov 2013, at 1:55 am, s.oreilly wrote:
> Hi Chrissie, thanks I did try that and it didn't work, but then, neither has
> adding the location constraints so maybe (and this is very possible) I am
> doing
> something else wrong!!
Quite probably. But we can't say for sure without logs.
>
> S
Hi Chrissie, thanks I did try that and it didn't work, but then, neither has
adding the location constraints so maybe (and this is very possible) I am doing
something else wrong!!
Sean O'Reilly
On Fri 08/11/13 2:40 PM , Christine Caulfield ccaul...@redhat.com sent:
> On 08/11/13 14:20, emmanuel
On 08/11/13 14:20, emmanuel segura wrote:
With a location constraint. If you need info about constraints, you can look at
the ClusterLabs docs.
You don't need to do that. Stonith is intelligent enough to know how to
fence a node regardless of where the device is supposedly running from.
Try it ;-)
With a location constraint. If you need info about constraints, you can look at
the ClusterLabs docs.
2013/11/8 s.oreilly
> That's what I thought. How do I specify which node to run them on?
>
> Many thanks
>
> Sean O'Reilly
>
> On Fri 08/11/13 1:08 PM , "emmanuel segura" emi2f...@gmail.com sent:
> > f
That's what I thought. How do I specify which node to run them on?
Many thanks
Sean O'Reilly
On Fri 08/11/13 1:08 PM , "emmanuel segura" emi2f...@gmail.com sent:
> fence of host1 needs to be running on host2 and fence of host2 needs to be
> running on host1
>
> 2013/11/8 s.oreilly
> I am tryi
The fence device for host1 needs to be running on host2, and the fence device
for host2 needs to be running on host1.
2013/11/8 s.oreilly
> I am trying to configure stonith on a 2 node cluster.
>
> Using fence_vmware_soap and it works manually
>
> I configure stonith as below
>
> pcs stonith create test-stonith1 params
I am trying to configure stonith on a 2 node cluster.
Using fence_vmware_soap and it works manually
I configure stonith as below
pcs stonith create test-stonith1 params ipaddr=vcenterserver login=login
passwd=passwd ssl=1 port=hostname1 action=reboot
pcs stonith create test-stonith2 params ipadd
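A sketch of tying each device to the node it should fence via pcmk_host_list
(vCenter address, credentials and VM names are placeholders); with this set, no
location constraint is strictly required:
pcs stonith create fence-host1 fence_vmware_soap \
    ipaddr=vcenterserver login=login passwd=passwd ssl=1 \
    port=hostname1 pcmk_host_list=host1
pcs stonith create fence-host2 fence_vmware_soap \
    ipaddr=vcenterserver login=login passwd=passwd ssl=1 \
    port=hostname2 pcmk_host_list=host2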
Personally I use fence_xvm. IIRC, it's the supported equivalent of fence_virsh.
On 24 Oct 2013, at 6:38 pm, Beo Banks wrote:
> hi,
>
> i have enable the debug option and i use the ip instead of hostname
>
> primitive stonith-zarafa02 stonith:fence_virsh \
> params pcmk_host_list="zaraf
On Wednesday, 23 October 2013 at 12:39:35, Beo Banks wrote:
> hi,
>
> thanks for answer.
>
> the pacemaker|corosync is running on both nodes.
>
> [chkconfig | grep corosync
> corosync   0:off  1:off  2:on  3:on  4:on  5:on  6:off
> chkconfig | grep pacemaker
> pacemaker  0:off
hi,
Thanks for the answer.
Pacemaker/corosync is running on both nodes.
[chkconfig | grep corosync
corosync   0:off  1:off  2:on  3:on  4:on  5:on  6:off
chkconfig | grep pacemaker
pacemaker  0:off  1:off  2:on  3:on  4:on  5:on  6:off
@ssh key
no, i created the ke
On Wednesday, 23 October 2013 at 10:57:28, Beo Banks wrote:
> *hi,
>
>
> i wants to testing the fail-over capabilities of my cluster.
> i run pkill -9 corosync on 2nd node and i saw on the 1node that he wants to
> stonith the node2 but he "giving up after too many failures to fence node"
>
>
>
> via
Hi,
I want to test the fail-over capabilities of my cluster.
I ran pkill -9 corosync on the 2nd node and saw on the 1st node that it wants to
stonith node2, but it is "giving up after too many failures to fence node".
Via the command line it works without any problems:
fence_virsh -a host2 -l root -x
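If the command line above works but the cluster resource keeps failing, the agent
usually needs the same ssh options spelled out in the resource; a sketch in crm
shell syntax (the VM name, host address and key path are assumptions):
primitive stonith-zarafa02 stonith:fence_virsh \
    params pcmk_host_list="zarafa02" port="zarafa02" \
           ipaddr="host2" login="root" secure="1" \
           identity_file="/root/.ssh/id_rsa" \
    op monitor interval="60s"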
On 18/10/13 07:00, Lars Marowsky-Bree wrote:
> On 2013-10-18T11:26:52, Nikola Ciprich wrote:
>
>> I'm using pacemaker-1.1.8 (RHEL6), fence-agents-4.0.3 (I compiled this myself
>> in order to use new netio stonith plugin), corosync-1.4.1, kernel-3.10.11 (in
>> case this would be important)
>
> Un
On 2013-10-18T11:26:52, Nikola Ciprich wrote:
> I'm using pacemaker-1.1.8 (RHEL6), fence-agents-4.0.3 (I compiled this myself
> in order to use new netio stonith plugin), corosync-1.4.1, kernel-3.10.11 (in
> case this would be important)
Unless 1.1.8-rhel has some fixes backported, I think you'd
Hi Lars,
ouch, I thought I had to write all the versions and of course forgot at the
end :-( sorry about that
I'm using pacemaker-1.1.8 (RHEL6), fence-agents-4.0.3 (I compiled this myself
in order to use new netio stonith plugin), corosync-1.4.1, kernel-3.10.11 (in
case this would be important)
On 2013-10-18T11:04:26, Nikola Ciprich wrote:
> Hi,
>
> I'm still trying to get fencing working with dual power supply / node
> and either I'm blind to some dumb mistake of mine, or there's some
> nasty pacemaker bug..
To anticipate the next obvious question, what's your pacemaker version?
Re
Hi,
I'm still trying to get fencing working with dual power supplies per node
and either I'm blind to some dumb mistake of mine, or there's some
nasty pacemaker bug..
I've set 4 stonith resources:
node1-st1-off ... action="off"
node1-st2-off ... action="off"
node1-st1-on ... action="on"
node1-st
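With a fencing topology, all devices listed in one level are run in order, so both
power feeds can be cut before power is restored. A sketch in crm shell syntax,
reusing the node1-st* resource names above (the node name is a placeholder):
# level 1 for node1: switch both feeds off, then both back on
fencing_topology \
    node1: node1-st1-off,node1-st2-off,node1-st1-on,node1-st2-on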
Hi Dejan,
and thanks for your reply!
> Good luck with that.
ouch, that doesn't sound too encouraging :)
>
> Which version of crmsh do you run? If it's 1.2.6, please open a
> bug report. If not, please upgrade :)
great, I hadn't even noticed that 1.2.6 was out...
however the problem persists:
crm(
Hi,
On Fri, Oct 04, 2013 at 10:43:56AM +0200, Nikola Ciprich wrote:
> Hi Guys,
>
> thanks a lot for the tip, fencing_topology seems to be exactly what I
> need! However, there seems to be the problem, I'm not sure whether
> it's me, pacemaker or stonith agent..
>
> I've set 4 stonith primitives,
Hi Guys,
thanks a lot for the tip, fencing_topology seems to be exactly what I
need! However, there seems to be a problem; I'm not sure whether
it's me, pacemaker, or the stonith agent...
I've set 4 stonith primitives, as per document:
primitive stonith-vbox3-1-off stonith:fence_netio \
pa
On 2013-10-03T23:50:15, Digimer wrote:
> > digimer's hack works, but it makes my eyes bleed. ;-)
> meanie!
That's not because of what you diligently debugged and described,
though, but because it's necessary. In my opinion, 90%+ of all setups
that actually need to use more than one device per le
On 03/10/13 13:14, Lars Marowsky-Bree wrote:
> digimer's hack works, but it makes my eyes bleed. ;-)
meanie!
--
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
On 03/10/13 05:42, Nikola Ciprich wrote:
> Hello,
>
> I'm playing with netio 230-CS PDU, and it works pretty well as
> fencing device for my testing pacemaker cluster. However, I'd like
> to use two such units plugged to different power sources and use
> them as fencing units for servers with re
On 2013-10-03T12:22:27, David Vossel wrote:
> > Is there some way to tell, node needs to be fenced using two fencing
> > devices? Or I'll need to create my own fencing plugin allowing to
> > use two fencing devices simultaneously?
> Not simultaneously (not sure if that is actually a requirement),
- Original Message -
> From: "Nikola Ciprich"
> To: pacemaker@oss.clusterlabs.org
> Sent: Thursday, October 3, 2013 7:42:39 AM
> Subject: [Pacemaker] stonith - using multiple fencing devices for one node to
> fence device with redundant power
> sources
>
Hello,
I'm playing with a netio 230-CS PDU, and it works pretty well as a
fencing device for my testing pacemaker cluster. However, I'd like
to use two such units plugged into different power sources and
use them as fencing units for servers with redundant power supplies
(each connected to one of the PD
On 20/06/2013, at 11:50 PM, Doug Clow wrote:
> I'll do some experiments to see if I can get Corosync more reliable. I'm
> using Corosync v1 as part of cman-corosync-pacemaker. RRP with one port on a
> switch and the other port on a crossover cable between the two hosts
> (although technical
I'll do some experiments to see if I can get Corosync more reliable. I'm using
Corosync v1 as part of cman-corosync-pacemaker. RRP with one port on a switch
and the other port on a crossover cable between the two hosts (although
technically each port is still part of a vSwitch since it's a VMwa
echo "0" > /sys/class/net/virbr0/bridge/multicast_snooping
That results in multicast packets being broadcast to all bridge ports. I
prefer to have an IGMP querier turned on on a central switch.
I had the impression that the OP used virtual machines on a local
(virtual and private) network. I
20.06.2013 03:12, Sven Arnold wrote:
> Hi Doug,
>
>> I have some 2-node active-passive clusters that occasionally lose
>> Corosync connectivity. The connectivity is fixed with a reboot.
>> They don't have shared storage so stonith doesn't have to happen for
>> another node to take control of the
On 20/06/2013, at 5:02 PM, Vladislav Bogdanov wrote:
> 20.06.2013 09:00, Andrew Beekhof wrote:
>>
>> On 20/06/2013, at 2:52 PM, Vladislav Bogdanov wrote:
>>
>>> 20.06.2013 00:36, Andrew Beekhof wrote:
On 20/06/2013, at 6:33 AM, Doug Clow wrote:
> Hello All,
>
>
20.06.2013 09:00, Andrew Beekhof wrote:
>
> On 20/06/2013, at 2:52 PM, Vladislav Bogdanov wrote:
>
>> 20.06.2013 00:36, Andrew Beekhof wrote:
>>>
>>> On 20/06/2013, at 6:33 AM, Doug Clow wrote:
>>>
Hello All,
I have some 2-node active-passive clusters that occasionally lose
On 20/06/2013, at 2:52 PM, Vladislav Bogdanov wrote:
> 20.06.2013 00:36, Andrew Beekhof wrote:
>>
>> On 20/06/2013, at 6:33 AM, Doug Clow wrote:
>>
>>> Hello All,
>>>
>>> I have some 2-node active-passive clusters that occasionally lose
>>> Corosync connectivity. The connectivity is fixed wi
20.06.2013 00:36, Andrew Beekhof wrote:
>
> On 20/06/2013, at 6:33 AM, Doug Clow wrote:
>
>> Hello All,
>>
>> I have some 2-node active-passive clusters that occasionally lose
>> Corosync connectivity. The connectivity is fixed with a reboot. They
>> don't have shared storage so stonith doesn't
Hi Doug,
I have some 2-node active-passive clusters that occasionally lose
Corosync connectivity. The connectivity is fixed with a reboot.
They don't have shared storage so stonith doesn't have to happen for
another node to take control of the resource. Also they are VMs so I
can't use a stand
On 20/06/2013, at 6:33 AM, Doug Clow wrote:
> Hello All,
>
> I have some 2-node active-passive clusters that occasionally lose Corosync
> connectivity. The connectivity is fixed with a reboot. They don't have
> shared storage so stonith doesn't have to happen for another node to take
> con
Hello All,
I have some 2-node active-passive clusters that occasionally lose Corosync
connectivity. The connectivity is fixed with a reboot. They don't have shared
storage so stonith doesn't have to happen for another node to take control of
the resource. Also they are VMs so I can't use a s
On 17/05/2013, at 12:23 AM, Brian J. Murrell wrote:
> Using Pacemaker 1.1.8 on EL6.4 with the pacemaker plugin, I'm finding
> strange behavior with "stonith-admin -B node2". It seems to shut the
> node down but not start it back up and ends up reporting a timer
> expired:
>
> # stonith_admin -
On 2013-05-16 11:01, Lars Marowsky-Bree wrote:
> On 2013-05-15T22:55:43, Andreas Kurz wrote:
>
>> start-delay is an option of the monitor operation ... in fact means
>> "don't trust that start was successfull, wait for the initial monitor
>> some more time"
>
> It can be used on start here thoug
On 2013-05-16 11:31, Klaus Darilion wrote:
> Hi Andreas!
>
> On 15.05.2013 22:55, Andreas Kurz wrote:
>> On 2013-05-15 15:34, Klaus Darilion wrote:
>>> On 15.05.2013 14:51, Digimer wrote:
On 05/15/2013 08:37 AM, Klaus Darilion wrote:
> primitive st-pace1 stonith:external/xen0 \
>
Using Pacemaker 1.1.8 on EL6.4 with the pacemaker plugin, I'm finding
strange behavior with "stonith_admin -B node2". It seems to shut the
node down but not start it back up and ends up reporting a timer
expired:
# stonith_admin -B node2
Command failed: Timer expired
The pacemaker log for the op
Hi Andreas!
On 15.05.2013 22:55, Andreas Kurz wrote:
On 2013-05-15 15:34, Klaus Darilion wrote:
On 15.05.2013 14:51, Digimer wrote:
On 05/15/2013 08:37 AM, Klaus Darilion wrote:
primitive st-pace1 stonith:external/xen0 \
params hostlist="pace1" dom0="xentest1" \
op start s
On 2013-05-15T22:55:43, Andreas Kurz wrote:
> start-delay is an option of the monitor operation ... in fact means
> "don't trust that start was successfull, wait for the initial monitor
> some more time"
It can be used on start here though to avoid exactly this situation; and
it works fine for t
On 2013-05-15 15:34, Klaus Darilion wrote:
> On 15.05.2013 14:51, Digimer wrote:
>> On 05/15/2013 08:37 AM, Klaus Darilion wrote:
>>> primitive st-pace1 stonith:external/xen0 \
>>> params hostlist="pace1" dom0="xentest1" \
>>> op start start-delay="15s" interval="0"
>>
>> Try;
>>
On 05/15/2013 09:34 AM, Klaus Darilion wrote:
On 15.05.2013 14:51, Digimer wrote:
On 05/15/2013 08:37 AM, Klaus Darilion wrote:
primitive st-pace1 stonith:external/xen0 \
params hostlist="pace1" dom0="xentest1" \
op start start-delay="15s" interval="0"
Try;
primitive st-pac
On 15.05.2013 14:51, Digimer wrote:
On 05/15/2013 08:37 AM, Klaus Darilion wrote:
primitive st-pace1 stonith:external/xen0 \
params hostlist="pace1" dom0="xentest1" \
op start start-delay="15s" interval="0"
Try;
primitive st-pace1 stonith:external/xen0 \
params host
On 05/15/2013 08:37 AM, Klaus Darilion wrote:
primitive st-pace1 stonith:external/xen0 \
params hostlist="pace1" dom0="xentest1" \
op start start-delay="15s" interval="0"
Try;
primitive st-pace1 stonith:external/xen0 \
params hostlist="pace1" dom0="xentest1" delay="15
Hi!
I have a 2-node cluster: a simple test setup with an
ocf:heartbeat:IPaddr2 resource, using Xen VMs and stonith:external/xen0.
Please see the complete config below.
Basically everything works fine, except in the case of broken corosync
communication between the nodes (simulated by shuttin
On 13-03-25 03:50 PM, Jacek Konieczny wrote:
>
> The first node to notice that the other is unreachable will fence (kill)
> the other, making sure it is the only one operating on the shared data.
Right. But with typical two-node clusters ignoring no-quorum, because
quorum is being ignored, as so
On Tue, Mar 26, 2013 at 6:30 PM, Angel L. Mateo wrote:
> On 25/03/13 20:50, Jacek Konieczny wrote:
>
>> On Mon, 25 Mar 2013 20:01:28 +0100
>> "Angel L. Mateo" wrote:
quorum {
provider: corosync_votequorum
expected_votes: 2
two_node: 1
}
>>>
On 25/03/13 20:50, Jacek Konieczny wrote:
On Mon, 25 Mar 2013 20:01:28 +0100
"Angel L. Mateo" wrote:
quorum {
provider: corosync_votequorum
expected_votes: 2
two_node: 1
}
Corosync will then manage quorum for the two-node cluster and
Pacemaker
I'm using corosync
On Mon, 25 Mar 2013 20:01:28 +0100
"Angel L. Mateo" wrote:
> >quorum {
> > provider: corosync_votequorum
> > expected_votes: 2
> > two_node: 1
> >}
> >
> >Corosync will then manage quorum for the two-node cluster and
> >Pacemaker
>
> I'm using corosync 1.1 which is the one provided
Jacek Konieczny wrote:
>On Mon, 25 Mar 2013 13:54:22 +0100
>> My problem is how to avoid split brain situation with this
>> configuration, without configuring a 3rd node. I have read about
>> quorum disks, external/sbd stonith plugin and other references, but
>> I'm too confused with a
On Mon, 25 Mar 2013 13:54:22 +0100
> My problem is how to avoid split brain situation with this
> configuration, without configuring a 3rd node. I have read about
> quorum disks, external/sbd stonith plugin and other references, but
> I'm too confused with all this.
>
> For example, [
I have a production cluster, using two vm on esx cluster, for stonith i'm
using sbd, everything work find
2013/3/25 Angel L. Mateo
> Hello,
>
> I am newbie with pacemaker (and, generally, with ha clusters). I
> have configured a two nodes cluster. Both nodes are virtual machines
> (vmwar
I have a production cluster using two VMs on an ESX cluster; for stonith I'm
using sbd, and everything works fine.
2013/3/25 emmanuel segura
> I have a production cluster, using two vm on esx cluster, for stonith i'm
> using sbd, everything work find
>
> 2013/3/25 Angel L. Mateo
>
>> Hello,
>>
>>
Hello,
I am a newbie with pacemaker (and, generally, with HA clusters). I have
configured a two-node cluster. Both nodes are virtual machines (VMware
ESX) and use shared storage (provided by a SAN, although access to the
SAN is from the ESX infrastructure and the VMs see it as a SCSI disk). I have
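For the sbd approach suggested above, a minimal sketch (the shared-disk path is a
placeholder, and newer stacks configure the device in /etc/sysconfig/sbd rather
than as a resource parameter):
# initialise sbd metadata on a small shared LUN visible to both VMs
sbd -d /dev/disk/by-id/scsi-SHARED_LUN create
# crm shell configuration for the fencing resource
primitive stonith-sbd stonith:external/sbd \
    params sbd_device="/dev/disk/by-id/scsi-SHARED_LUN" \
    op monitor interval="30s"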
On Thu, Jan 31, 2013 at 9:36 AM, Andreas Kurz wrote:
> On 2013-01-30 20:51, Matthew O'Connor wrote:
>> Hi! I must be doing something stupidly wrong... every time I add a new
>> node to my live cluster, the first thing the cluster decides to do is
>> STONITH the node, and despite any precautions
On 2013-01-30T14:51:33, Matthew O'Connor wrote:
> Hi! I must be doing something stupidly wrong... every time I add a new
> node to my live cluster, the first thing the cluster decides to do is
> STONITH the node, and despite any precautions I take (other than
> flat-out disabling STONITH during
Ah, very good - thank you so much!!
On 01/30/2013 05:36 PM, Andreas Kurz wrote:
> On 2013-01-30 20:51, Matthew O'Connor wrote:
>> Hi! I must be doing something stupidly wrong... every time I add a new
>> node to my live cluster, the first thing the cluster decides to do is
>> STONITH the node,
On 2013-01-30 20:51, Matthew O'Connor wrote:
> Hi! I must be doing something stupidly wrong... every time I add a new
> node to my live cluster, the first thing the cluster decides to do is
> STONITH the node, and despite any precautions I take (other than
> flat-out disabling STONITH during the
Hi! I must be doing something stupidly wrong... every time I add a new
node to my live cluster, the first thing the cluster decides to do is
STONITH the node, despite any precautions I take (other than
flat-out disabling STONITH during the reconfiguration). Is this
normal? I'm currently run
On Sat, Nov 17, 2012 at 12:01 AM, Denny Schierz wrote:
> hi,
>
> I've tried the latest fence_legacy, but only getstate works, but not the
> reset one ..
>
> https://raw.github.com/ClusterLabs/pacemaker/master/fencing/fence_legacy
>
> I've tested also the complete path:
>
> /vmfs/volumes/4c528367-
hi,
I've tried the latest fence_legacy, but only getstate works, not the reset
one ...
https://raw.github.com/ClusterLabs/pacemaker/master/fencing/fence_legacy
I've tested also the complete path:
/vmfs/volumes/4c528367-19e3e2c9-9871-0021288ea4ad/sqlnode-02/sqlnode-02.vmx
again, works for g