[ClusterLabs] Making xt_cluster IP load-sharing work with IPv6 (Was: Concept of a Shared ipaddress/resource for generic applications)

2020-01-02 Thread Jan Pokorný
On 27/12/19 15:04 +0100, Valentin Vidić wrote:
> On Wed, Dec 04, 2019 at 02:44:49PM +0100, Jan Pokorný wrote:
>> For the record, based on my feedback, the iptables-extensions man
>> page is headed to (finally) align with the actual in-kernel
>> deprecation message:
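For context, the xt_cluster match discussed in this thread as CLUSTERIP's successor is driven by ordinary iptables rules rather than a dedicated target. A sketch adapted from the iptables-extensions man page example (interface name, node count, and mark value are illustrative, not from this thread):

```
# Hash incoming packets across 2 nodes; this node claims bucket 1.
iptables -A PREROUTING -t mangle -i eth1 -m cluster \
    --cluster-total-nodes 2 --cluster-local-node 1 \
    --cluster-hash-seed 0xdeadbeef -j MARK --set-mark 0xffff
# Drop everything not hashed to this node.
iptables -A PREROUTING -t mangle -i eth1 -m mark ! --mark 0xffff -j DROP
```

Each node installs the same pair of rules with a different `--cluster-local-node` value, so the nodes share one IP while each answers only its own slice of the traffic.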

[ClusterLabs] [RFC] LoadSharingIP agent idea, xt_cluster/IPv6/nftables (Was: Support for xt_cluster)

2020-01-02 Thread Jan Pokorný
On 19/12/19 10:18 -0600, Ken Gaillot wrote:
> On Thu, 2019-12-19 at 15:01 +, Marcus Vinicius wrote:
>> Is there any intention to abandon CLUSTERIP
>
> yes
>
>> in favor of xt_cluster.ko?
>
> no
>
> :)
>
> A recent thread about this:

Re: [ClusterLabs] build error pacemaker

2020-01-02 Thread Christopher Lumens
> the change you proposed did not work. But then I noticed, I don't
> have ncurses-devel installed - after installation the compilation
> went OK. I think it is solved for my case, but probably some bug
> remains...

ncurses is not a strict requirement of pacemaker, so we can't really make its

Re: [ClusterLabs] build error pacemaker

2020-01-02 Thread Kamil Poturnaj
Hello, the change you proposed did not work. But then I noticed, I don't have ncurses-devel installed - after installation the compilation went OK. I think it is solved for my case, but probably some bug remains... Thank you! Kamil

Re: [ClusterLabs] build error pacemaker

2020-01-02 Thread Christopher Lumens
> crm_mon_curses.c:300:1: error: args to be formatted is not ‘...’
>  curses_indented_printf(pcmk__output_t *out, const char *format,
>  va_list args) {
>  ^~
> crm_mon_curses.c:300:1: error: conflicting types for
> ‘curses_indented_printf’
> In file included from

Re: [ClusterLabs] Prevent Corosync Qdevice Failback in split brain scenario.

2020-01-02 Thread Jan Friesse
Somanath,

Hi, I am planning to use Corosync Qdevice version 3.0.0 with corosync version 2.4.4 and pacemaker 1.1.16 in a two node cluster. I want to know if failback can be avoided in the below situation.

1. The pcs cluster is in split brain scenario after a network break between two
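For reference, a quorum device of this kind is declared in the `quorum` section of corosync.conf roughly as follows (a sketch; the qnetd host name is a placeholder, and `ffsplit` is the tie-breaker algorithm relevant to deciding which half of a two-node split-brain keeps quorum):

```
quorum {
    provider: corosync_votequorum
    device {
        model: net
        votes: 1
        net {
            host: qnetd.example.com
            algorithm: ffsplit
        }
    }
}
```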

Re: [ClusterLabs] SBD restarted the node while pacemaker in maintenance mode

2020-01-02 Thread Klaus Wenninger
On 12/26/19 9:27 AM, Roger Zhou wrote:
> On 12/24/19 11:48 AM, Jerry Kross wrote:
>> Hi,
>> The pacemaker cluster manages a 2 node database cluster configured
>> to use 3 iscsi disk targets in its stonith configuration. The
>> pacemaker cluster was put in maintenance mode but we see SBD
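A setup like the one described, with SBD watching several shared iSCSI disks, is typically wired up in the sbd sysconfig file roughly as below (a sketch; the device paths are placeholders for the poster's three iSCSI targets):

```
# /etc/sysconfig/sbd (device paths are illustrative)
SBD_DEVICE="/dev/disk/by-id/iscsi-target-1;/dev/disk/by-id/iscsi-target-2;/dev/disk/by-id/iscsi-target-3"
SBD_WATCHDOG_DEV=/dev/watchdog
SBD_PACEMAKER=yes
```

Whether SBD self-fences independently of cluster maintenance mode is exactly the behaviour the thread goes on to discuss.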