>>> Stuart Massey wrote on 22.01.2021 at 14:08 in message:
> Hi Ulrich,
> Thank you for your response.
> It makes sense that this would be happening on the failing secondary/slave
> node, in which case we might expect drbd to be restarted (the entire
> service, since it is already
> You need to:
> - Set up and TEST stonith
> - Add a 3rd node (even if it doesn't host any resources) or set up a
> node for kronosnet
Thank you Strahil, looking into it.
Regards
Jaikumar
___
Manage your subscription:
> How to handle it?
You need to:
- Set up and TEST stonith
- Add a 3rd node (even if it doesn't host any resources) or set up a
node for kronosnet
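A rough sketch of those two steps follows. This is only an illustration: the fence agent (fence_ipmilan), node names, IP addresses, credentials, and the qnetd hostname are all placeholders you would replace with your own, and your hardware may need a different fence agent entirely.

```shell
# Hypothetical sketch: adjust agent, credentials, and hostnames to your setup.

# 1) Create one fence device per node...
pcs stonith create fence-n1 fence_ipmilan \
    ip=192.0.2.11 username=admin password=secret \
    pcmk_host_list=node1
pcs stonith create fence-n2 fence_ipmilan \
    ip=192.0.2.12 username=admin password=secret \
    pcmk_host_list=node2

# ...and actually TEST it: the target node should be power-cycled.
pcs stonith fence node2

# 2) Instead of a full third cluster node, a quorum device (corosync-qnetd)
#    on a separate host lets a two-node cluster break ties cleanly.
pcs quorum device add model net host=qnetd.example.com algorithm=ffsplit
```

Testing the fence device for real (not just configuring it) is the important part; an untested stonith device tends to fail exactly when the cluster needs it.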
Best Regards,
Strahil Nikolov
Hi all,
Right off the bat: I'm using a custom RA, so this behaviour might be a
bug in my agent.
I had a test server (srv01-test) running on node 1 (el8-a01n01), and on
node 2 (el8-a01n02) I ran 'pcs cluster stop --all'.
It appears that pacemaker asked the VM to migrate to node 2 instead of