Re: [ClusterLabs] ocf_heartbeat_pgsql - lock file
Seems like as of 2019 this is the normal reaction. There was discussion about finding a way to make it automatic, but as of now the issue remains open. See this issue for the details: https://github.com/ClusterLabs/resource-agents/issues/699

Brian

> On 7 May 2023, at 12:56 PM, lejeczek via Users wrote:
>
> Hi guys.
>
> I have a resource seemingly running OK, but when a node gets rebooted the
> cluster then finds it unable to start the resource.
>
> Failed Resource Actions:
>   * PGSQL start on podnode3 returned 'error' (My data may be inconsistent.
>     You have to remove /var/lib/pgsql/tmp/PGSQL.lock file to force start.)
>     at Sun May 7 11:48:43 2023 after 121ms
>
> And indeed, with manual intervention, after removal of that file, the
> cluster seems happy to rejoin the node into the pgsql cluster.
> Is that intentional, by design, and if yes/no, then why does it happen?
>
> many thanks, L.

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
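For the record, the manual recovery described in the thread amounts to something like the following sketch (resource name PGSQL and the lock-file path are taken from the error message above; verify data consistency first, since the lock file exists precisely to stop a possibly stale copy from starting):

```shell
# On the affected node, after confirming the data directory is not stale
# (the lock file is the pgsql agent's guard against inconsistent data):
sudo rm /var/lib/pgsql/tmp/PGSQL.lock

# Clear the recorded start failure so Pacemaker will retry the resource:
sudo pcs resource cleanup PGSQL
```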
[ClusterLabs] Best DRBD Setup
Hi all,

I’ve been working on my home cluster setup for a while now and have tried various setups for DRBD resources. I finally settled on one that I think is the best, but I’m still not completely satisfied with the results. The biggest question is: did I set this up in the best way? Any advice would be appreciated.

I have multiple servers set up in groups, and the DRBDs are in separate clones, as below. I added constraints to hopefully ensure things work together well. If you have any questions to clarify my setup, let me know.

  * Resource Group: git-server:
    * gitea-mount (ocf:heartbeat:Filesystem): Started node1
    * git-ip (ocf:heartbeat:IPaddr2): Started node1
    * gitea (systemd:gitea): Started node1
    * backup-gitea (systemd:backupgitea.timer): Started node1
  * Resource Group: pihole-server:
    * pihole-mount (ocf:heartbeat:Filesystem): Started node2
    * pihole-ip (ocf:heartbeat:IPaddr2): Started node2
    * pihole-ftl (systemd:pihole-FTL): Started node2
    * pihole-web (systemd:lighttpd): Started node2
    * pihole-cron (ocf:heartbeat:symlink): Started node2
    * pihole-backup (systemd:backupDRBD@pihole.timer): Started node2
  * Clone Set: drbd-gitea-clone [drbd-gitea] (promotable):
    * Promoted: [ node1 ]
    * Unpromoted: [ node2 node3 node4 node5 ]
  * Clone Set: drbd-pihole-clone [drbd-pihole] (promotable):
    * Promoted: [ node2 ]
    * Unpromoted: [ node1 node3 node4 node5 ]

Ordering Constraints:
  start drbd-gitea-clone then start gitea-mount (kind:Mandatory)
  start drbd-pihole-clone then start pihole-mount (kind:Mandatory)
Colocation Constraints:
  pihole-server with drbd-pihole-clone (score:INFINITY) (rsc-role:Started) (with-rsc-role:Promoted)
  git-server with drbd-gitea-clone (score:INFINITY) (rsc-role:Started) (with-rsc-role:Promoted)

My setup is on five Raspberry Pis running Ubuntu Server 22.10 with:
  pacemaker 2.1.4-2ubuntu1
  pcs 0.11.3-1ubuntu1
  drbd 9.2.2-1ppa1~jammy1

Overall the setup works, but it seems quite fragile.
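For reference, the ordering and colocation constraints shown in the status output above correspond to pcs commands along these lines (a sketch using pcs 0.11 syntax; resource names are taken from the status listing):

```shell
# Start each DRBD clone before mounting its filesystem:
pcs constraint order start drbd-gitea-clone then start gitea-mount
pcs constraint order start drbd-pihole-clone then start pihole-mount

# Keep each service group on the node where its DRBD clone is Promoted:
pcs constraint colocation add git-server with Promoted drbd-gitea-clone INFINITY
pcs constraint colocation add pihole-server with Promoted drbd-pihole-clone INFINITY
```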
I suffer from a lot of fencing whenever I reboot a server and it doesn’t want to restart correctly. Another thing I have noticed is that it will sometimes take as long as 10-12 minutes to mount one of the DRBD filesystems (XFS), so I have extended the start timeout for each *-mount resource to 15 minutes.

Thanks in advance for any advice to improve the setup.

Brian

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/