Re: [ClusterLabs] I question whether STONITH is working.
Klaus Wenninger wrote on 02/16/2017 03:27:07 AM:

> fence_sbd should return a proper dynamic-list.
> So without ports and host-list it should just work fine.
> Not even a host-map should be needed. Or actually it is not
> supported, because if sbd is using different node naming than
> pacemaker, the pacemaker watcher within sbd is going to fail.

It was said that 'port=' is not needed, that if I used the command below
it would just work (as I understood what was being said). So I deleted the
device with this command.

pcs -f stonith_cfg stonith delete sbd-fence

Recreated it without 'port='.

pcs -f stonith_cfg stonith create sbd-fence fence_sbd devices="/dev/sdb1"
pcs cluster cib-push stonith_cfg

From node2 I executed this command.

stonith_admin --reboot node1

But node2 rebooted anyway.

If I follow what Ken shared, I would need another 'watchdog' in addition
to another sbd device. Are multiple watchdogs possible? I am lost at this
point. I have 2 VM nodes running Fedora 25 on a Windows 10 host. Every
node in a cluster needs to be fenced (as I understand it). Using SBD, what
is the correct way to proceed?

Thank you,

Durwin
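A minimal sketch of the two-device approach Ken describes, assuming the node names (node1, node2) and shared device (/dev/sdb1) used in this thread; the resource names fence-node1/fence-node2 are made up for illustration, and the stonith_admin check at the end is optional:

# One fence_sbd device per node, restricted via pcmk_host_list
pcs cluster cib stonith_cfg
pcs -f stonith_cfg stonith create fence-node1 fence_sbd devices="/dev/sdb1" pcmk_host_list="node1"
pcs -f stonith_cfg stonith create fence-node2 fence_sbd devices="/dev/sdb1" pcmk_host_list="node2"
pcs -f stonith_cfg property set stonith-enabled=true
pcs cluster cib-push stonith_cfg

# Ask pacemaker which devices it would use to fence each node
stonith_admin -l node1
stonith_admin -l node2

Only one watchdog per node is needed in either setup: the watchdog (softdog here) belongs to the node's own sbd daemon, not to a fence resource, so adding a second fence device does not require a second watchdog.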
Re: [ClusterLabs] I question whether STONITH is working.
Klaus Wenninger wrote on 02/16/2017 10:43:19 AM:

> > I am not clear on what you are conveying. On the command
> > 'pcs -f stonith_cfg stonith create' I do not need the port= option?
>
> e.g. 'pcs stonith create FenceSBD fence_sbd devices="/dev/vdb"'
> should do the whole trick.

Thank you. Since I already executed this command, executing it again
without port= says the device already exists. What is the correct way to
remove the current device so I can create it again without port=?

Durwin

> > Ken stated I need an sbd device for each node in the cluster
> > (needing fencing). I assume each node is a possible failure and would
> > need fencing. So what *is* a slot? SBD device allocates 255 slots in
> > each device. These slots are not to keep track of the nodes?
>
> There is a slot for each node - and if the sbd instance doesn't find
> one matching its own name it creates one (paints one of the 255 that is
> unused with its own name). The slots are used to send messages to the
> sbd instances on the nodes.
>
> > Regarding fence_sbd returning a dynamic-list. The command
> > 'sbd -d /dev/sdb1 list' returns every node in the cluster.
> > Is this the list you are referring to?
>
> Yes and no. The fence_sbd fence-agent is using the same command to
> create that list when it is asked by pacemaker which nodes it is able
> to fence. So you don't have to hardcode that, although you can of course
> use a host-map if you don't want sbd-fencing to be used for certain
> nodes because you might have a better fencing device (can be solved
> using fencing-levels as well).
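A short sketch of how the slots Klaus describes can be inspected on the shared device, using the device path from this thread; the 'allocate' and 'test' forms are from the sbd utility and may vary slightly between versions:

# Show which of the 255 slots are painted with a node name
sbd -d /dev/sdb1 list

# Claim a slot for a node by hand (sbd normally does this itself at startup)
sbd -d /dev/sdb1 allocate node1

# Write a harmless test message into node2's slot; node2's sbd daemon
# should log that it received it
sbd -d /dev/sdb1 message node2 test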
Re: [ClusterLabs] I question whether STONITH is working.
On 02/16/2017 05:42 PM, dur...@mgtsciences.com wrote:

> I am not clear on what you are conveying. On the command
> 'pcs -f stonith_cfg stonith create' I do not need the port= option?

e.g. 'pcs stonith create FenceSBD fence_sbd devices="/dev/vdb"'
should do the whole trick.

> Ken stated I need an sbd device for each node in the cluster (needing
> fencing). I assume each node is a possible failure and would need
> fencing. So what *is* a slot? SBD device allocates 255 slots in each
> device. These slots are not to keep track of the nodes?

There is a slot for each node - and if the sbd instance doesn't find one
matching its own name it creates one (paints one of the 255 that is
unused with its own name). The slots are used to send messages to the
sbd instances on the nodes.

> Regarding fence_sbd returning a dynamic-list. The command
> 'sbd -d /dev/sdb1 list' returns every node in the cluster.
> Is this the list you are referring to?

Yes and no. The fence_sbd fence-agent is using the same command to create
that list when it is asked by pacemaker which nodes it is able to fence.
So you don't have to hardcode that, although you can of course use a
host-map if you don't want sbd-fencing to be used for certain nodes
because you might have a better fencing device (can be solved using
fencing-levels as well).
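The fencing-levels idea Klaus mentions, sketched with pcs; the ipmi-node1 device is hypothetical, and only the generic 'pcs stonith level' syntax is assumed:

# Prefer a (hypothetical) IPMI device for node1, fall back to SBD
pcs stonith level add 1 node1 ipmi-node1
pcs stonith level add 2 node1 FenceSBD

# Review the configured levels
pcs stonith level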
Re: [ClusterLabs] I question whether STONITH is working.
Klaus Wenninger wrote on 02/16/2017 03:27:07 AM:

> fence_sbd should return a proper dynamic-list.
> So without ports and host-list it should just work fine.
> Not even a host-map should be needed. Or actually it is not
> supported, because if sbd is using different node naming than
> pacemaker, the pacemaker watcher within sbd is going to fail.

I am not clear on what you are conveying. On the command
'pcs -f stonith_cfg stonith create' I do not need the port= option?

Ken stated I need an sbd device for each node in the cluster (needing
fencing). I assume each node is a possible failure and would need
fencing. So what *is* a slot? SBD device allocates 255 slots in each
device. These slots are not to keep track of the nodes?

Regarding fence_sbd returning a dynamic-list. The command
'sbd -d /dev/sdb1 list' returns every node in the cluster.
Is this the list you are referring to?

Thank you,

Durwin
Re: [ClusterLabs] I question whether STONITH is working.
On 02/15/2017 10:30 PM, Ken Gaillot wrote:
> On 02/15/2017 12:17 PM, dur...@mgtsciences.com wrote:
>> pcs cluster cib stonith_cfg
>> pcs -f stonith_cfg stonith create sbd-fence fence_sbd
>>     devices="/dev/sdb1" port="node2"
>
> The above command creates a fence device configured to kill node2 -- but
> it doesn't tell the cluster which nodes the device can be used to kill.
> Thus, even if you try to fence node1, it will use this device, and node2
> will be shot.
>
> The pcmk_host_list parameter specifies which nodes the device can kill.
> If not specified, the device will be used to kill any node. So, just add
> pcmk_host_list=node2 here.
>
> You'll need to configure a separate device to fence node1.
>
> I haven't used fence_sbd, so I don't know if there's a way to configure
> it as one device that can kill both nodes.

fence_sbd should return a proper dynamic-list.
So without ports and host-list it should just work fine.
Not even a host-map should be needed. Or actually it is not supported,
because if sbd is using different node naming than pacemaker, the
pacemaker watcher within sbd is going to fail.
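A minimal sketch of the single-device configuration Klaus is suggesting, using the device path from this thread; the final call assumes fence_sbd accepts the standard fence-agent --action=list option:

# One fence_sbd device for the whole cluster; it builds its target list
# from the slots on the shared disk, so no port= or pcmk_host_list needed
pcs cluster cib stonith_cfg
pcs -f stonith_cfg stonith create FenceSBD fence_sbd devices="/dev/sdb1"
pcs -f stonith_cfg property set stonith-enabled=true
pcs cluster cib-push stonith_cfg

# What the agent would report to pacemaker as fenceable targets
fence_sbd --devices=/dev/sdb1 --action=list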
Re: [ClusterLabs] I question whether STONITH is working.
On 02/15/2017 12:17 PM, dur...@mgtsciences.com wrote:

> I have 2 Fedora VMs (node1 and node2) running on a Windows 10 machine
> using Virtualbox.
>
> I began with this.
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Clusters_from_Scratch/
>
> When it came to fencing, I referred to this.
> http://www.linux-ha.org/wiki/SBD_Fencing
>
> To the file /etc/sysconfig/sbd I added these lines.
> SBD_OPTS="-W"
> SBD_DEVICE="/dev/sdb1"
> I added 'modprobe softdog' to rc.local
>
> After getting sbd working, I resumed with Clusters from Scratch,
> chapter 8.3.
> I executed these commands *only* on node1. Am I supposed to run any of
> these commands on other nodes? 'Clusters from Scratch' does not specify.

Configuration commands only need to be run once. The cluster
synchronizes all changes across the cluster.

> pcs cluster cib stonith_cfg
> pcs -f stonith_cfg stonith create sbd-fence fence_sbd
>     devices="/dev/sdb1" port="node2"

The above command creates a fence device configured to kill node2 -- but
it doesn't tell the cluster which nodes the device can be used to kill.
Thus, even if you try to fence node1, it will use this device, and node2
will be shot.

The pcmk_host_list parameter specifies which nodes the device can kill.
If not specified, the device will be used to kill any node. So, just add
pcmk_host_list=node2 here.

You'll need to configure a separate device to fence node1.

I haven't used fence_sbd, so I don't know if there's a way to configure
it as one device that can kill both nodes.

> pcs -f stonith_cfg property set stonith-enabled=true
> pcs cluster cib-push stonith_cfg
>
> I then tried this command from node1.
> stonith_admin --reboot node2
>
> Node2 did not reboot or even shut down. The command 'sbd -d /dev/sdb1
> list' showed node2 as off, but I was still logged into it (cluster
> status on node2 showed not running).
>
> I rebooted, ran this command on node2, and started the cluster.
> sbd -d /dev/sdb1 message node2 clear
>
> If I ran this command on node2, node2 rebooted.
> stonith_admin --reboot node1
>
> What have I missed or done wrong?
>
> Thank you,
>
> Durwin F. De La Rue
> Management Sciences, Inc.
> 6022 Constitution Ave. NE
> Albuquerque, NM 87110
> Phone (505) 255-8611

___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
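For completeness, the per-node sbd setup the original post describes, written out as a sketch; SBD_WATCHDOG_DEV and the sbd systemd unit are not mentioned in the thread and are assumptions about the Fedora 25 packages:

# /etc/sysconfig/sbd -- on every node that should be fenceable
SBD_DEVICE="/dev/sdb1"
SBD_OPTS="-W"
SBD_WATCHDOG_DEV="/dev/watchdog"    # assumption: device created by softdog

# Load the software watchdog (the thread does this from rc.local)
modprobe softdog

# assumption: start sbd via its systemd unit rather than rc.local
systemctl enable sbd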