Using on-fail=fence is what I initially tried, but unfortunately it
doesn't work.
It looks like this is because the ethmonitor monitor operation won't
actually fail when it detects a downed interface; it will only fail if
it is unable to update the CIB, as per this comment:
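Since ethmonitor reports link state through a node attribute rather than a failing monitor, the usual pattern is a location rule on that attribute instead of on-fail=fence. A minimal sketch in crmsh syntax, assuming the monitored interface is eth0 and a hypothetical group g_services that should move away when the link drops:

```shell
# ethmonitor writes the node attribute "ethmonitor-eth0" (1 = link up,
# 0 = link down) instead of failing its monitor action, so react to
# the attribute with a location rule.
crm configure primitive p_ethmon ocf:heartbeat:ethmonitor \
    params interface=eth0 \
    op monitor interval=10s
crm configure clone cl_ethmon p_ethmon

# Forbid g_services (hypothetical name) on any node whose eth0 is down:
crm configure location loc_no_eth0 g_services \
    rule -inf: ethmonitor-eth0 eq 0
```

This moves resources off the node without relying on the monitor operation itself failing.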
The mentioned error occurs when attempting to promote the PostgreSQL
resource on the standby node, after the master PostgreSQL resource is
stopped.
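When a promote action fails like this, a generic first step is to inspect the failed actions and failcounts before clearing them. A sketch, assuming a hypothetical resource name pgsql and the node names from this thread:

```shell
# One-shot cluster status including inactive resources and failcounts:
crm_mon -1rf

# Show the failcount for the resource on the standby node:
crm resource failcount pgsql show node2.local

# Once the underlying cause is fixed, clear the failure so Pacemaker
# retries the promote:
crm resource cleanup pgsql
```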
For info, here is my configuration:
Corosync Nodes:
node1.local node2.local
Pacemaker Nodes:
node1.local node2.local
Resources:
Clone:
crm node online testnfs32
that fixed it, thanks Andrei
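For reference, "crm node online" brings a node back from standby; a sketch of the full standby/online cycle (node name taken from the thread, the standby step is an assumption about the prior state):

```shell
# Drain resources off the node (puts it in standby):
crm node standby testnfs32

# Bring it back; the DRBD clone can then start there as Secondary:
crm node online testnfs32
```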
On Sun, Nov 14, 2021 at 11:47 AM Neil McFadyen wrote:
> I have a Ubuntu 20.04 drbd nfs pacemaker/corosync setup for 2 nodes, it
> was working fine before but now I can't get the 2nd node to show as a slave
> under the Clone Set. So if I
Also, check what 'drbdadm' has to tell you. Both nodes should be in sync,
otherwise pacemaker will prevent the failover.
Best Regards,
Strahil Nikolov
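The sync check Strahil mentions looks different on DRBD 8.4 (the in-kernel module typically used on Ubuntu 20.04) versus DRBD 9; a sketch, assuming a hypothetical resource name r0:

```shell
# DRBD 8.4: both nodes in sync shows cs:Connected and ds:UpToDate/UpToDate
cat /proc/drbd

# DRBD 9 (if installed from the LINBIT packages): look for
# replication:Established and peer-disk:UpToDate
drbdadm status r0
```

If the peer shows Inconsistent or the connection is StandAlone, Pacemaker will refuse to promote on that node until DRBD is reconnected and resynced.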
On Sun, Nov 14, 2021 at 20:09, Andrei Borzenkov wrote:
On 14.11.2021 19:47, Neil McFadyen wrote:
> I have a Ubuntu 20.04 drbd nfs pacemaker/corosync setup for 2 nodes, it
> was working fine before but now I can't get the 2nd node to show as a slave
> under the Clone Set. So if I do a failover both nodes show as stopped.
I have a Ubuntu 20.04 drbd nfs pacemaker/corosync setup for 2 nodes, it
was working fine before but now I can't get the 2nd node to show as a slave
under the Clone Set. So if I do a failover both nodes show as stopped.
root@testnfs30:/etc/drbd.d# crm status
Cluster Summary:
* Stack: corosync