Re: [ClusterLabs] Antw: How to clean up failed fencing action?

2019-08-05 Thread Klaus Wenninger
On 8/5/19 3:00 PM, Ulrich Windl wrote: >>> Andrei Borzenkov wrote on 03.08.2019 at 18:17 in message <35a226a8-115b-4dc0-f505-dbd78cdd7...@gmail.com>: >> I'm using the sbd watchdog and stonith‑watchdog‑timeout without explicit >> stonith agents (shared-nothing cluster). How can I clean up
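For context, a watchdog-only (diskless) sbd setup as described in the question is typically configured along these lines. This is a hedged sketch, not taken from the thread; the file path and timeout values are illustrative and distribution-dependent:

```shell
# /etc/sysconfig/sbd (Debian/Ubuntu: /etc/default/sbd) -- watchdog-only
# mode, i.e. no SBD_DEVICE / shared disk configured
SBD_WATCHDOG_DEV=/dev/watchdog
SBD_WATCHDOG_TIMEOUT=5

# Cluster property referenced in the question; the value must exceed the
# sbd watchdog timeout (recent Pacemaker versions can also derive it
# automatically from SBD_WATCHDOG_TIMEOUT)
pcs property set stonith-watchdog-timeout=10
```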

[ClusterLabs] Antw: How to clean up failed fencing action?

2019-08-05 Thread Ulrich Windl
>>> Andrei Borzenkov wrote on 03.08.2019 at 18:17 in message <35a226a8-115b-4dc0-f505-dbd78cdd7...@gmail.com>: > I'm using the sbd watchdog and stonith‑watchdog‑timeout without explicit > stonith agents (shared-nothing cluster). How can I clean up a failed > fencing action? > > Current DC: ha1
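The fencing history cleanup the question asks about can be done with `stonith_admin` (the history subcommands assume Pacemaker 2.0 or later; older releases did not track fencing history this way). A hedged sketch:

```shell
# Show recorded fencing actions for all nodes ('*' matches every target)
stonith_admin --history '*' --verbose

# Clear failed fencing-history entries for all nodes
stonith_admin --cleanup --history '*'

# On pcs-based distributions the equivalent wrappers are:
pcs stonith history show
pcs stonith history cleanup
```

These commands require a running cluster, so outputs depend on the local fencing history.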

Re: [ClusterLabs] Query on HA

2019-08-05 Thread Andrei Borzenkov
There is no one-size-fits-all answer. You should enable and configure stonith in Pacemaker (it is evidently disabled, otherwise the described situation would not happen). You may also consider the wait_for_all (or, better, two_node) option in corosync, which prevents Pacemaker from starting unless both nodes are up. On
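The two corosync options mentioned above live in the `quorum` section of corosync.conf. A minimal sketch (the values are the standard ones from the votequorum documentation, not taken from the thread):

```
# /etc/corosync/corosync.conf (fragment)
quorum {
    provider: corosync_votequorum
    # two_node: treat a 2-node cluster as quorate with one vote;
    # it implicitly enables wait_for_all
    two_node: 1
    # wait_for_all can also be set explicitly: after a full cluster
    # shutdown, do not become quorate until all nodes have been seen
    wait_for_all: 1
}
```

Re-enabling stonith is then a single property change, e.g. `pcs property set stonith-enabled=true` on pcs-based distributions.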

[ClusterLabs] Query on HA

2019-08-05 Thread Aloknath
Hi, I am a beginner with clusters and am facing one issue. I have a 2-node cluster designed for fail-over. Pacemaker and corosync are the cluster managers used to build it. Scenario: this happens when both nodes are powered off together and then booted again at the same time. Issue: node2