Hi,
Thank you so much! Could you please advise on the following case:
The cluster I am trying to set up is PostgreSQL with streaming replication
managed by PAF. So it will elect one node as master and 3 standby nodes.
So, with this, from what I understand of PostgreSQL, having 2
Hi,
Thank you so much for your help. May I ask a follow-up question:
For the option of having one big cluster with 4 nodes without booth: if one
site (with 2 nodes) goes down, then the other site stops working because it
does not have quorum, am I right? Even if we have a quorum voter in
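A third-site arbiter is the usual answer to exactly this tie. A minimal corosync.conf quorum fragment, as a sketch only (the qnetd host address 10.0.3.1 is made up, assuming a corosync-qnetd server running at a third site):

```
quorum {
    provider: corosync_votequorum
    device {
        model: net
        votes: 1
        net {
            host: 10.0.3.1        # hypothetical qnetd arbiter on a third site
            algorithm: ffsplit    # favour exactly one partition on an even split
        }
    }
}
```

With this in place the cluster has 5 votes (4 nodes + 1 qdevice), so when one two-node site fails, the surviving site still holds 3 of 5 votes and keeps quorum.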
We use the fence_kdump* code extensively in production and have never had any
problems with it (other than the normal initial configuration challenges).
Kernel panic + kdump is our most common failure mode, so we exercise this code
quite a bit.
Thanks,
Chris
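For reference, the usual shape of such a setup is fence_kdump as a first fencing level, with a real power fence behind it for the case where no kdump heartbeat arrives. A hedged pcs sketch (node names and the ipmi-node1 device are made up):

```
# Level-1 device: waits for the fence_kdump_send heartbeat from the crashed node
pcs stonith create kdump-fence fence_kdump \
    pcmk_host_list="node1 node2"

# Fencing levels for node1: try kdump first, fall back to a real power fence
pcs stonith level add 1 node1 kdump-fence
pcs stonith level add 2 node1 ipmi-node1
```

fence_kdump_send then has to run inside the kdump initrd on each node, which is where the networking/bonding-in-initrd requirement mentioned below comes from.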
Hi,
On 24/02/2022 14:19, Viet Nguyen wrote:
On Thu, Feb 24, 2022 at 1:17 PM Jan Friesse wrote:
Hi!
After reading about fence_kdump and fence_kdump_send I wonder:
Does anybody use that in production?
Having the networking and bonding in initrd does not sound like a good idea to
me.
Wouldn't it be easier to integrate that functionality into sbd?
I mean: Let sbd wait for a "kdump-ed" message
Hi!
I just discovered this oddity for a SLES15 SP3 cluster:
Feb 24 11:16:17 h16 pacemaker-attrd[7274]: notice: Setting val_net_gw1[h18]:
1000 -> 139000
That surprised me, because usually the value is 1000 or 0.
Digging a bit further I found:
Migration Summary:
* Node: h18:
*
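Side note for anyone chasing the same thing: the raw transient attribute can be queried directly, which helps separate what attrd stores from what the scheduler derives. Assuming the attribute and node names from the log above:

```
# Query the transient attribute on the node in question
attrd_updater --query --name val_net_gw1 --node h18

# One-shot cluster status including fail counts
crm_mon -1f
```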
On 24/02/2022 10:28, Viet Nguyen wrote: