From: Jan Friesse [mailto:jfrie...@redhat.com]
Sent: Tuesday, July 25, 2017 11:59 AM
To: Cluster Labs - All topics related to open-source clustering welcomed
<users@clusterlabs.org>; kwenn...@redhat.com; Prasad, Shashank
<sspra...@vanu.com>
Subject: Re: [ClusterLabs] Two nodes cluster issue
Tomer Azran wrote:
> Just updating that I added another level of fencing using watchdog-fencing.
> With
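An additional watchdog-based fencing level of this kind is usually set up by enabling sbd; a minimal sketch with pcs, assuming the sbd package is installed and /dev/watchdog exists (timeout values are examples, not from the thread):

```shell
# Enable sbd in watchdog-only mode (no shared disk) on all nodes.
# SBD_WATCHDOG_TIMEOUT=10 is an example value; tune it for your hardware.
pcs stonith sbd enable --watchdog=/dev/watchdog SBD_WATCHDOG_TIMEOUT=10

# Tell Pacemaker that a node whose fencing cannot be confirmed may be
# assumed dead once the watchdog timeout has expired (2x is a common rule).
pcs property set stonith-watchdog-timeout=20s

# sbd only takes effect after a full cluster restart.
pcs cluster stop --all && pcs cluster start --all
```

With this in place, if the regular fencing device is unreachable, the surviving node waits out stonith-watchdog-timeout and then treats the peer as self-fenced by its watchdog.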
Please ignore my re-reply to the original message, I'm in the middle of
a move and am getting by on little sleep at the moment :-)
On Mon, 2017-07-31 at 09:26 -0500, Ken Gaillot wrote:
> On Mon, 2017-07-24 at 11:51 +0000, Tomer Azran wrote:
> > Hello,
> >
> > We built a pacemaker cluster with 2 physical servers.
> >
> > We configured DRBD in Master\Slave setup, a floating IP and file
> > system mount in Active\Passive mode.
> >
> > We configured two STONITH devices (fence_ipmilan), one for each server.
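A stack like the one described (promotable DRBD, filesystem, floating IP in active/passive) is commonly wired together roughly as follows; this is a sketch only, with made-up resource names, device paths, and addresses, using pcs 0.10+ syntax (older pcs used master/slave resources instead of `promotable`):

```shell
# Hypothetical names: DRBD resource r0, mount /mnt/data, VIP 192.0.2.10.
pcs resource create drbd_data ocf:linbit:drbd drbd_resource=r0 \
    promotable promoted-max=1 promoted-node-max=1 clone-max=2 notify=true
pcs resource create fs_data ocf:heartbeat:Filesystem \
    device=/dev/drbd0 directory=/mnt/data fstype=ext4
pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.0.2.10 cidr_netmask=24
pcs resource group add ha_group fs_data vip

# Filesystem and IP must run where DRBD is promoted, and only after promotion.
pcs constraint colocation add ha_group with Promoted drbd_data-clone INFINITY
pcs constraint order promote drbd_data-clone then start ha_group
```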
To: <.az...@edp.co.il>; Cluster Labs - All topics related to
open-source clustering welcomed <users@clusterlabs.org>; Prasad, Shashank
<sspra...@vanu.com>
Subject: Re: [ClusterLabs] Two nodes cluster issue
On 07/24/2017 11:59 PM, Tomer Azran wrote:
There is a problem with that – it seems like SB
From: Klaus Wenninger [mailto:kwenn...@redhat.com]
Sent: Monday, July 24, 2017 9:01 PM
To: Cluster Labs - All topics related to open-source clustering
welcomed <users@clusterlabs.org>; Prasad, Shashank <sspra...@vanu.com>
Subject: Re: [ClusterLabs] Two nodes cluster issue
On 07/24/2017 07:32 PM, Prasad, Shashank wrote:
Sometimes IPMI fence devices use shared power of the node, and it cannot be
avoided.
In such scenarios the HA cluster is NOT able to handle the power failure of a
Sent: Tuesday, July 25, 2017 3:00 AM
To: kwenn...@redhat.com; Cluster Labs - All topics related to open-source
clustering welcomed; Prasad, Shashank
Subject: RE: [ClusterLabs] Two nodes cluster issue
I tend to agree with Klaus – I don't think that having a hook that bypasses
stonith is the
>>>> out of those situations in the absence of SBD, I believe using
>>>> user-defined failover hooks (via scripts) into Pacemaker Alerts, with sudo
>>>> permissions for ‘hacluster’, should help.
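A hook of that sort would be installed as a Pacemaker alert agent. Below is a minimal sketch; the CRM_alert_kind, CRM_alert_node, and CRM_alert_desc environment variables are real Pacemaker alert variables, but the script itself and the sudo helper it mentions are illustrative, not from the thread:

```shell
#!/bin/sh
# Hypothetical alert agent. Pacemaker invokes alert agents with CRM_alert_*
# environment variables set; the fence_hook.sh helper below is made up.
handle_alert() {
    case "${CRM_alert_kind}" in
        fencing)
            # React only to fencing events; hand off to a site-specific hook.
            echo "fencing event on ${CRM_alert_node}: ${CRM_alert_desc}"
            # sudo /usr/local/bin/fence_hook.sh "${CRM_alert_node}"
            ;;
        *)
            # Ignore node/resource/attribute alerts in this sketch.
            :
            ;;
    esac
}
handle_alert
```

It would then be registered with something like `pcs alert create path=/usr/local/bin/alert_hook.sh`, and the sudoers rule would grant hacluster only that single helper command.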
>>>
>>> If you don't see your fencing device, assuming after some time
>>> that the corresponding node will probably be down is quite risky
>>> in my opinion.
>>> But why not assure it to be down using a watchdog?
>>>
>>>> Thanx.
>>>>
>>>>
>>>> *From:* Klaus Wenninger [mailto:kwenn...@redhat.com]
> *From:* Klaus Wenninger [mailto:kwenn...@redhat.com]
> *Sent:* Monday, July 24, 2017 11:31 PM
> *To:* Cluster Labs - All topics related to open-source clustering
> welcomed; Prasad, Shashank
> *Subject:* Re: [ClusterLabs] Two nodes cluster issue
>
> On 07/24/2017 07:32 PM
> Thanx.
>
> *From:*Klaus Wenninger [mailto:kwenn...@redhat.com]
> *Sent:* Monday, July 24, 2017 9:24 PM
> *To:* Kristián Feldsam; Cluster Labs - All topics related to
> open-source clustering welcomed
> *Subject:* Re: [ClusterLabs] Two nodes cluster issue
>
>
to be configured.
Thanx.
On 07/24/2017 05:37 PM, Kristián Feldsam wrote:
> I personally think that power off node by switched pdu is more safe,
> or not?
True if that is working in your environment. If you can't do a physical setup
where you aren't simultaneously losing connection to both your node and
the switch-device
reliably and that the other node is assuming it to be down after a timeout
you configured using cluster property stonith-watchdog-timeout.
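The property mentioned here is set cluster-wide; a sketch with pcs, assuming sbd is already enabled with a 10 s watchdog timeout (values are examples):

```shell
# stonith-watchdog-timeout must be longer than SBD_WATCHDOG_TIMEOUT, or a
# node could be declared dead before its watchdog has actually fired;
# twice the watchdog timeout is a common choice.
pcs property set stonith-watchdog-timeout=20s

# Verify the setting:
pcs property list --all | grep stonith-watchdog-timeout
```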
I personally think that power off node by switched pdu is more safe, or not?
Best regards, Kristián Feldsam
Tel.: +420 773 303 353, +421 944 137 535
E-mail: supp...@feldhost.cz
www.feldhost.cz - FeldHost™ – professional hosting and server services at
reasonable prices.
FELDSAM s.r.o.
V rohu
So your suggestion is to use sbd with or without qdevice? What is the point of
having a qdevice in a two-node cluster if it doesn't help in this situation?
From: Klaus Wenninger
Sent: Monday, July 24, 18:28
Subject: Re: [ClusterLabs] Two nodes cluster issue
To: Cluster Labs - All topics related to open-source clustering welcomed
On 07/24/2017 05:15 PM, Tomer Azran wrote:
> I still don't understand why the qdevice concept doesn't help in this
> situation. Since the master node is down, I would expect the quorum to
> declare it as dead.
> Why doesn't it happen?
That is not how quorum works. It just limits the
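For context: quorum only gates whether a partition is allowed to run resources; it never confirms that the other node is actually dead, which is fencing's job. Adding a qdevice arbiter to a two-node cluster is typically done like this (a sketch; the hostname is a placeholder):

```shell
# On the third machine (the arbiter), assuming corosync-qnetd is installed:
pcs qdevice setup model net --enable --start

# On one cluster node: point the cluster at the arbiter. The ffsplit
# algorithm guarantees at most one partition gets the vote in a 50/50 split.
pcs quorum device add model net host=qnetd.example.com algorithm=ffsplit

# Check the resulting membership and votes:
pcs quorum status
```

Even with this, a node that keeps quorum must still fence its peer before taking over resources.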
On Mon, Jul 24, 2017 at 4:15 PM +0300, "Dmitri Maziuk"
APC AP7921 is just 200€ on eBay.
On 2017-07-24 07:51, Tomer Azran wrote:
> We don't have the ability to use it.
> Is that the only solution?
No, but I'd recommend thinking about it first. Are you sure you will
care about your cluster working when your server room is on fire? 'Cause
unless you have halon suppression, your server
We don't have the ability to use it.
Is that the only solution?
In addition, it will not cover a scenario where the server room is down (for
example, fire or earthquake); the switch will go down as well.
From: Klaus Wenninger
Sent: Monday, July 24, 15:31
Subject: Re: [ClusterLabs] Two nodes
On 07/24/2017 02:05 PM, Kristián Feldsam wrote:
> Hello, you have to use second fencing device, for ex. APC Switched PDU.
>
> https://wiki.clusterlabs.org/wiki/Configure_Multiple_Fencing_Devices_Using_pcs
Problem here seems to be that the fencing devices available are running from
the same
Hello, you have to use second fencing device, for ex. APC Switched PDU.
https://wiki.clusterlabs.org/wiki/Configure_Multiple_Fencing_Devices_Using_pcs
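The wiki page above arranges such devices into fencing levels, so the PDU is only tried if IPMI fails; a sketch with pcs, where the stonith resource and node names are made up and assumed to already exist:

```shell
# Level 1: IPMI fencing; level 2: switched PDU outlet, for the same node.
pcs stonith level add 1 node1 ipmi-node1
pcs stonith level add 2 node1 pdu-node1
pcs stonith level add 1 node2 ipmi-node2
pcs stonith level add 2 node2 pdu-node2

# Show the resulting fencing topology:
pcs stonith level
```

Pacemaker walks the levels in order per node: only when every device in level 1 fails does it try level 2.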
Hello,
We built a pacemaker cluster with 2 physical servers.
We configured DRBD in Master\Slave setup, a floating IP and file system mount
in Active\Passive mode.
We configured two STONITH devices (fence_ipmilan), one for each server.
We are trying to simulate a situation when the Master server
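The two fence_ipmilan devices described would typically be created along these lines; a sketch only, with placeholder BMC addresses, credentials, and node names (parameter names like `ip`/`username`/`password` are the newer fence-agent spellings; older agents used `ipaddr`/`login`/`passwd`):

```shell
# One fencing device per node, each pointing at that node's BMC.
pcs stonith create fence_node1 fence_ipmilan ip=192.0.2.101 \
    username=admin password=secret lanplus=1 pcmk_host_list=node1
pcs stonith create fence_node2 fence_ipmilan ip=192.0.2.102 \
    username=admin password=secret lanplus=1 pcmk_host_list=node2

# Keep each device off the node it fences, so a node never fences itself.
pcs constraint location fence_node1 avoids node1
pcs constraint location fence_node2 avoids node2
```

Note the thread's caveat: because IPMI draws power from the node it fences, a total power loss takes the fencing device down with the node, which is exactly why a second level (PDU or watchdog/sbd) is being discussed.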