I always start with the bare minimum. From there I've added a bit more (e.g. stonith) to see what its behavior was. My experience with DRBD has been solid; it has always worked with everything from CentOS 5 through CentOS 7. CentOS 8 is a different animal; I haven't gotten that to work. I have a feeling it's still buggy (either pacemaker or, more likely, the DRBD/drbd-utils versions).

Brent

On 1/18/2021 6:04 AM, Strahil Nikolov wrote:
Can you check this old thread: https://forums.centos.org/viewtopic.php?t=65539

Theoretically an EL7 vs EL8 DRBD cluster should not be different. Maybe you missed 
something. Also try with the simplest DRBD conf possible, and if it starts 
working, add options back one by one to identify which is causing the trouble.
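
A minimal sketch of such a conf (the hostnames nfs5/nfs6, the backing device, 
and the addresses below are placeholders, not taken from this thread):

   # /etc/drbd.d/r0.res -- minimal sketch; adjust hostnames, devices, addresses
   resource r0 {
     device    /dev/drbd0;
     disk      /dev/sdb1;      # placeholder backing device
     meta-disk internal;
     on nfs5 { address 192.168.10.5:7789; }
     on nfs6 { address 192.168.10.6:7789; }
   }

Once that alone syncs and promotes cleanly, the tuning options (protocol, 
net options, handlers, etc.) can be re-added one at a time.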

Best Regards,
Strahil Nikolov

On Saturday, January 16, 2021 at 17:51:05 GMT+2, Brent Jensen 
<jener...@gmail.com> wrote:

Maybe. I haven't focused on any stickiness for which node is generally
master or not. Going standby on the master node should move the slave to
master. I'm just trying to follow what's been tried and true in the past,
and it's no longer behaving the way I would expect. About the only change
is the way pacemaker is set up (pacemaker version 1 uses a master resource,
whereas version 2 uses a promotable resource). I'm not 100% sure I'm doing
it right because I haven't gotten it working.
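
For a quick check of that expectation, a standby/unstandby cycle is usually
enough. A sketch, assuming nfs5 currently holds the master role and pcs 0.10
syntax:

   pcs node standby nfs5    # DRBD should be demoted here and promoted on the peer
   pcs status --full        # drbd0-clone should now show the peer as master/promoted
   pcs node unstandby nfs5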

Working DRBD resource (pacemaker 1):
  Master: ms_drbd0
   Meta Attrs: clone-max=2 clone-node-max=1 master-max=1 master-node-max=1 notify=true target-role=Started
   Resource: drbd0 (class=ocf provider=linbit type=drbd)
    Attributes: drbd_resource=r0
    Operations: demote interval=0s timeout=90 (drbd0-demote-interval-0s)
                monitor interval=15 role=Master (drbd0-monitor-interval-15)
                monitor interval=30 role=Slave (drbd0-monitor-interval-30)
                notify interval=0s timeout=90 (drbd0-notify-interval-0s)
                promote interval=0s timeout=90 (drbd0-promote-interval-0s)
                reload interval=0s timeout=30 (drbd0-reload-interval-0s)
                start interval=0 timeout=240s (drbd0-start-interval-0)
                stop interval=0 timeout=100s (drbd0-stop-interval-0)


This resource (pacemaker 2):
  Clone: drbd0-clone
   Meta Attrs: clone-max=2 clone-node-max=1 notify=true promotable=true promoted-max=1 promoted-node-max=1 target-role=Started
   Resource: drbd0 (class=ocf provider=linbit type=drbd)
    Attributes: drbd_resource=r0
    Operations: demote interval=0s timeout=90 (drbd0-demote-interval-0s)
                monitor interval=20 role=Slave timeout=20 (drbd0-monitor-interval-20)
                monitor interval=10 role=Master timeout=20 (drbd0-monitor-interval-10)
                notify interval=0s timeout=90 (drbd0-notify-interval-0s)
                promote interval=0s timeout=90 (drbd0-promote-interval-0s)
                reload interval=0s timeout=30 (drbd0-reload-interval-0s)
                start interval=0s timeout=240 (drbd0-start-interval-0s)
                stop interval=0s timeout=100 (drbd0-stop-interval-0s)
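
For reference, a promotable clone like the one above can be created in one
shot with pcs 0.10; a sketch using the values from the config shown (verify
the exact form against pcs resource create --help):

   pcs resource create drbd0 ocf:linbit:drbd drbd_resource=r0 \
       op monitor interval=20 role=Slave timeout=20 \
       op monitor interval=10 role=Master timeout=20 \
       op start timeout=240 op stop timeout=100 \
       promotable promoted-max=1 promoted-node-max=1 \
       clone-max=2 clone-node-max=1 notify=true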

On 1/15/2021 5:16 PM, Ulrich Windl wrote:
On 1/15/21 10:10 PM, Brent Jensen wrote:
pacemaker-attrd[7671]: notice: Setting master-drbd0[nfs5]: 10000 -> 1000
I wonder: Does that mean the stickiness for master is still 1000 on nfs5?
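
One way to see the current value is to query the transient node attribute
directly (a sketch; master-drbd0 and nfs5 are taken from the log line above):

   crm_attribute --node nfs5 --name master-drbd0 --lifetime reboot --query
   # or
   attrd_updater --query --name master-drbd0 --node nfs5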


_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
