Re: [ClusterLabs] Fence agent for VirtualBox

2017-02-23 Thread durwin
Klaus Wenninger <kwenn...@redhat.com> wrote on 02/23/2017 01:12:19 AM:

> From: Klaus Wenninger <kwenn...@redhat.com>
> To: Cluster Labs - All topics related to open-source clustering 
> welcomed <users@clusterlabs.org>
> Date: 02/23/2017 01:13 AM
> Subject: Re: [ClusterLabs] Fence agent for VirtualBox
> 
> On 02/23/2017 07:48 AM, Marek Grac wrote:
> > Hi,
> >
> > we have added support for a host with Windows, but it is not trivial to
> > set up because of various contexts/privileges.
> >
> > Install openssh on Windows (a tutorial can be found at
> > http://linuxbsdos.com/2015/07/30/how-to-install-openssh-on-windows-10/)
> >
> > There is a major issue with the current setup on Windows.  You have to
> > start virtual machines from an openssh connection if you wish to manage
> > them from an openssh connection.
> >
> 
> I have read about similar issues with openssh on Windows for other
> use-cases, and other ssh implementations for Windows seem to do better /
> be more user-friendly.

Any idea how Cygwin sshd would work?  I use Cygwin for everything and
already have sshd running.

Thank you,

Durwin
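A quick way to check whether the Cygwin sshd can drive VirtualBox at all
is to run VBoxManage non-interactively over ssh before involving the fence
agent. A minimal sketch (the install path and host address are
assumptions):

ssh durwin@WINDOWS_HOST '"/cygdrive/c/Program Files/Oracle/VirtualBox/VBoxManage.exe" list runningvms'

If that prints the running VMs, the transport side works; the
session-context caveat Marek describes below would still apply.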

> 
> No personal experience from my side but maybe others on the list ...
> 
> Don't get me wrong - I'm not speaking against free software - but on top
> of a non-free OS it makes less difference, I guess.
> 
> > So, you have to connect from Windows to the very same Windows machine
> > using ssh and then run
> >
> > "/Program Files/Oracle/VirtualBox/VBoxManage.exe" startvm NAME_OF_VM
> >
> > Be prepared that you will not see that your VM is running in the
> > VirtualBox management UI.
> >
> > Afterwards it is enough to add parameter --host-os windows (or
> > host_os=windows when stdin/pcs is used).
> >
> > m,
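Putting Marek's steps together, a sketch of what the resulting
configuration might look like (host name, user and key path are
placeholders; untested):

pcs stonith create vbox-fence fence_vbox ipaddr=WINDOWS_HOST login=USER \
    identity_file=/root/.ssh/id_rsa host_os=windows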
> >
> > On Wed, Feb 22, 2017 at 11:49 AM, Marek Grac <mg...@redhat.com
> > <mailto:mg...@redhat.com>> wrote:
> >
> > Hi,
> >
> > I have updated the fence agent for Virtual Box (upstream git). The
> > main benefit is the new option --host-os (host_os on stdin) that
> > supports linux|macos. So if your host is linux/macos, all you need
> > to set is this option (and ssh access to the machine). I would love
> > to add support also for Windows, but I'm not able to run
> > vboxmanage.exe over openssh. It works perfectly from the command
> > prompt under the same user, so there are some privilege issues; if
> > you know how to fix this, please let me know.
> >
> > m,
> >
> >
> >
> >
> 





Re: [ClusterLabs] I question whether STONITH is working.

2017-02-20 Thread durwin
Klaus Wenninger <kwenn...@redhat.com> wrote on 02/16/2017 03:27:07 AM:

> From: Klaus Wenninger <kwenn...@redhat.com>
> To: kgail...@redhat.com, Cluster Labs - All topics related to open-
> source clustering welcomed <users@clusterlabs.org>
> Date: 02/16/2017 03:27 AM
> Subject: Re: [ClusterLabs] I question whether STONITH is working.
> 
> On 02/15/2017 10:30 PM, Ken Gaillot wrote:
> > On 02/15/2017 12:17 PM, dur...@mgtsciences.com wrote:
> >> I have 2 Fedora VMs (node1 and node2) running on a Windows 10 machine
> >> using Virtualbox.
> >>
> >> I began with this.
> >> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Clusters_from_Scratch/
> >>
> >>
> >> When it came to fencing, I refered to this.
> >> http://www.linux-ha.org/wiki/SBD_Fencing
> >>
> >> To the file /etc/sysconfig/sbd I added these lines.
> >> SBD_OPTS="-W"
> >> SBD_DEVICE="/dev/sdb1"
> >> I added 'modprobe softdog' to rc.local
> >>
> >> After getting sbd working, I resumed with Clusters from Scratch,
> >> chapter 8.3.
> >> I executed these commands *only* on node1.  Am I supposed to run any
> >> of these commands on other nodes? 'Clusters from Scratch' does not
> >> specify.
> > Configuration commands only need to be run once. The cluster
> > synchronizes all changes across the cluster.
> >
> >> pcs cluster cib stonith_cfg
> >> pcs -f stonith_cfg stonith create sbd-fence fence_sbd
> >> devices="/dev/sdb1" port="node2"
> > The above command creates a fence device configured to kill node2 --
> > but it doesn't tell the cluster which nodes the device can be used to
> > kill. Thus, even if you try to fence node1, it will use this device,
> > and node2 will be shot.
> >
> > The pcmk_host_list parameter specifies which nodes the device can
> > kill. If not specified, the device will be used to kill any node. So,
> > just add pcmk_host_list=node2 here.
> >
> > You'll need to configure a separate device to fence node1.
> >
> > I haven't used fence_sbd, so I don't know if there's a way to
> > configure it as one device that can kill both nodes.
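For illustration, Ken's suggestion of one device per node could look like
this, using the create syntax from earlier in the thread (a sketch, not a
tested recipe):

pcs -f stonith_cfg stonith create fence-node1 fence_sbd devices="/dev/sdb1" pcmk_host_list=node1
pcs -f stonith_cfg stonith create fence-node2 fence_sbd devices="/dev/sdb1" pcmk_host_list=node2

As Klaus notes below, fence_sbd's dynamic list should make per-node
devices unnecessary.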
> 
> fence_sbd should return a proper dynamic-list.
> So without ports and host-list it should just work fine.
> Not even a host-map should be needed. Or actually it is not
> supported because if sbd is using different node-naming than
> pacemaker, pacemaker-watcher within sbd is gonna fail.

It was said that 'port=' is not needed and that, if I used the command
below, it would just work (as I understood it).  So I deleted the existing
device with this command.
pcs -f stonith_cfg stonith delete sbd-fence

Recreated without 'port='.
pcs -f stonith_cfg stonith create sbd-fence fence_sbd devices="/dev/sdb1"
pcs cluster cib-push stonith_cfg

From node2 I executed this command.
stonith_admin --reboot node1

But node2 rebooted anyway.

If I follow what Ken shared, I would need another 'watchdog' in addition
to another sbd device.  Are multiple watchdogs possible?

I am lost at this point.

I have 2 VM nodes running Fedora 25 on a Windows 10 host.  Every
node in a cluster needs to be fenced (as I understand it).  Using
SBD, what is the correct way to proceed?

Thank you,

Durwin
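For what it's worth, a minimal sanity check using only commands already
shown in this thread would be (node names are examples):

sbd -d /dev/sdb1 list          # every node should have a slot, state 'clear'
stonith_admin --reboot node2   # run from node1; node2 should be reset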

> 
> >
> >> pcs -f stonith_cfg property set stonith-enabled=true
> >> pcs cluster cib-push stonith_cfg
> >>
> >> I then tried this command from node1.
> >> stonith_admin --reboot node2
> >>
> >> Node2 did not reboot or even shut down.  The command 'sbd -d /dev/sdb1
> >> list' showed node2 as off, but I was still logged into it (cluster
> >> status on node2 showed not running).
> >>
> >> I rebooted and ran this command on node2 and started the cluster.
> >> sbd -d /dev/sdb1 message node2 clear
> >>
> >> If I ran this command on node2, node2 rebooted.
> >> stonith_admin --reboot node1
> >>
> >> What have I missed or done wrong?
> >>
> >>
> >> Thank you,
> >>
> >> Durwin F. De La Rue
> >> Management Sciences, Inc.
> >> 6022 Constitution Ave. NE
> >> Albuquerque, NM  87110
> >> Phone (505) 255-8611
> >

Re: [ClusterLabs] I question whether STONITH is working.

2017-02-16 Thread durwin
Klaus Wenninger <kwenn...@redhat.com> wrote on 02/16/2017 10:43:19 AM:

> From: Klaus Wenninger <kwenn...@redhat.com>
> To: dur...@mgtsciences.com, Cluster Labs - All topics related to 
> open-source clustering welcomed <users@clusterlabs.org>
> Cc: kgail...@redhat.com
> Date: 02/16/2017 10:43 AM
> Subject: Re: [ClusterLabs] I question whether STONITH is working.
> 
> On 02/16/2017 05:42 PM, dur...@mgtsciences.com wrote:
> Klaus Wenninger <kwenn...@redhat.com> wrote on 02/16/2017 03:27:07 AM:
> 
> > From: Klaus Wenninger <kwenn...@redhat.com> 
> > To: kgail...@redhat.com, Cluster Labs - All topics related to open-
> > source clustering welcomed <users@clusterlabs.org> 
> > Date: 02/16/2017 03:27 AM 
> > Subject: Re: [ClusterLabs] I question whether STONITH is working. 
> > 
> > On 02/15/2017 10:30 PM, Ken Gaillot wrote:
> > > On 02/15/2017 12:17 PM, dur...@mgtsciences.com wrote:
> > >> I have 2 Fedora VMs (node1 and node2) running on a Windows 10
> > >> machine using Virtualbox.
> > >>
> > >> I began with this.
> > >> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Clusters_from_Scratch/
> > >>
> > >>
> > >> When it came to fencing, I refered to this.
> > >> http://www.linux-ha.org/wiki/SBD_Fencing
> > >>
> > >> To the file /etc/sysconfig/sbd I added these lines.
> > >> SBD_OPTS="-W"
> > >> SBD_DEVICE="/dev/sdb1"
> > >> I added 'modprobe softdog' to rc.local
> > >>
> > >> After getting sbd working, I resumed with Clusters from Scratch,
> > >> chapter 8.3.
> > >> I executed these commands *only* on node1.  Am I supposed to run
> > >> any of these commands on other nodes? 'Clusters from Scratch' does
> > >> not specify.
> > > Configuration commands only need to be run once. The cluster
> > > synchronizes all changes across the cluster.
> > >
> > >> pcs cluster cib stonith_cfg
> > >> pcs -f stonith_cfg stonith create sbd-fence fence_sbd
> > >> devices="/dev/sdb1" port="node2"
> > > The above command creates a fence device configured to kill node2 --
> > > but it doesn't tell the cluster which nodes the device can be used
> > > to kill. Thus, even if you try to fence node1, it will use this
> > > device, and node2 will be shot.
> > >
> > > The pcmk_host_list parameter specifies which nodes the device can
> > > kill. If not specified, the device will be used to kill any node.
> > > So, just add pcmk_host_list=node2 here.
> > >
> > > You'll need to configure a separate device to fence node1.
> > >
> > > I haven't used fence_sbd, so I don't know if there's a way to
> > > configure it as one device that can kill both nodes.
> > 
> > fence_sbd should return a proper dynamic-list.
> > So without ports and host-list it should just work fine.
> > Not even a host-map should be needed. Or actually it is not
> > supported because if sbd is using different node-naming than
> > pacemaker, pacemaker-watcher within sbd is gonna fail. 
> 
> I am not clear on what you are conveying.  On the command 
> 'pcs -f stonith_cfg stonith create' I do not need the port= option?
> 
> e.g. 'pcs stonith create FenceSBD fence_sbd devices="/dev/vdb"'
> should do the whole trick.

Thank you.  Since I already executed this command, executing it again
without port= says the device already exists.  What is the correct way to
remove the current device so I can create it again without port=?

Durwin
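For reference, the sequence used later in this thread (see the 2017-02-20
message above) to recreate the device without port= was:

pcs -f stonith_cfg stonith delete sbd-fence
pcs -f stonith_cfg stonith create sbd-fence fence_sbd devices="/dev/sdb1"
pcs cluster cib-push stonith_cfg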

> 
> 
> Ken stated I need an sbd device for each node in the cluster 
> (needing fencing). 
> I assume each node is a possible failure and would need fencing. 
> So what *is* a slot?  SBD device allocates 255 slots in each device. 
> These slots are not to keep track of the nodes?
> 
> There is a slot for each node - and if the sbd-instance doesn't find
> one matching
> its own name it creates one (paints one of the 255 that is unused 
> with its own name).
> The slots are used to send messages to the sbd-instances on the nodes.

> 
> 
> Regarding fence_sbd returning dynamic-list.  The command 
> 'sbd -d /dev/sdb1 list' returns every node in the cluster. 
> Is this the list you are referring to?
> 
> Yes and no. fence_sbd - fence-agent is using the same command to create 
that
> list when it is asked by pacemaker which nodes it is able to fence.
> So you don't have to hardcode that, although you can of course using a
> host-map if you don't

Re: [ClusterLabs] I question whether STONITH is working.

2017-02-16 Thread durwin
Klaus Wenninger <kwenn...@redhat.com> wrote on 02/16/2017 03:27:07 AM:

> From: Klaus Wenninger <kwenn...@redhat.com>
> To: kgail...@redhat.com, Cluster Labs - All topics related to open-
> source clustering welcomed <users@clusterlabs.org>
> Date: 02/16/2017 03:27 AM
> Subject: Re: [ClusterLabs] I question whether STONITH is working.
> 
> On 02/15/2017 10:30 PM, Ken Gaillot wrote:
> > On 02/15/2017 12:17 PM, dur...@mgtsciences.com wrote:
> >> I have 2 Fedora VMs (node1 and node2) running on a Windows 10 machine
> >> using Virtualbox.
> >>
> >> I began with this.
> >> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Clusters_from_Scratch/
> >>
> >>
> >> When it came to fencing, I refered to this.
> >> http://www.linux-ha.org/wiki/SBD_Fencing
> >>
> >> To the file /etc/sysconfig/sbd I added these lines.
> >> SBD_OPTS="-W"
> >> SBD_DEVICE="/dev/sdb1"
> >> I added 'modprobe softdog' to rc.local
> >>
> >> After getting sbd working, I resumed with Clusters from Scratch,
> >> chapter 8.3.
> >> I executed these commands *only* on node1.  Am I supposed to run any
> >> of these commands on other nodes? 'Clusters from Scratch' does not
> >> specify.
> > Configuration commands only need to be run once. The cluster
> > synchronizes all changes across the cluster.
> >
> >> pcs cluster cib stonith_cfg
> >> pcs -f stonith_cfg stonith create sbd-fence fence_sbd
> >> devices="/dev/sdb1" port="node2"
> > The above command creates a fence device configured to kill node2 --
> > but it doesn't tell the cluster which nodes the device can be used to
> > kill. Thus, even if you try to fence node1, it will use this device,
> > and node2 will be shot.
> >
> > The pcmk_host_list parameter specifies which nodes the device can
> > kill. If not specified, the device will be used to kill any node. So,
> > just add pcmk_host_list=node2 here.
> >
> > You'll need to configure a separate device to fence node1.
> >
> > I haven't used fence_sbd, so I don't know if there's a way to
> > configure it as one device that can kill both nodes.
> 
> fence_sbd should return a proper dynamic-list.
> So without ports and host-list it should just work fine.
> Not even a host-map should be needed. Or actually it is not
> supported because if sbd is using different node-naming than
> pacemaker, pacemaker-watcher within sbd is gonna fail.

I am not clear on what you are conveying.  On the command
'pcs -f stonith_cfg stonith create' I do not need the port= option?

Ken stated I need an sbd device for each node in the cluster (needing
fencing).
I assume each node is a possible failure and would need fencing.
So what *is* a slot?  The SBD device allocates 255 slots in each device.
These slots are not to keep track of the nodes?

Regarding fence_sbd returning dynamic-list.  The command
'sbd -d /dev/sdb1 list' returns every node in the cluster.
Is this the list you are referring to?

Thank you,

Durwin
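For a two-node setup, 'sbd -d /dev/sdb1 list' output typically looks like
the following (illustrative; slot numbers depend on allocation order):

0       node1   clear
1       node2   clear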

> 
> >
> >> pcs -f stonith_cfg property set stonith-enabled=true
> >> pcs cluster cib-push stonith_cfg
> >>
> >> I then tried this command from node1.
> >> stonith_admin --reboot node2
> >>
> >> Node2 did not reboot or even shut down.  The command 'sbd -d /dev/sdb1
> >> list' showed node2 as off, but I was still logged into it (cluster
> >> status on node2 showed not running).
> >>
> >> I rebooted and ran this command on node2 and started the cluster.
> >> sbd -d /dev/sdb1 message node2 clear
> >>
> >> If I ran this command on node2, node2 rebooted.
> >> stonith_admin --reboot node1
> >>
> >> What have I missed or done wrong?
> >>
> >>
> >> Thank you,
> >>
> >> Durwin F. De La Rue
> >> Management Sciences, Inc.
> >> 6022 Constitution Ave. NE
> >> Albuquerque, NM  87110
> >> Phone (505) 255-8611
> >

[ClusterLabs] I question whether STONITH is working.

2017-02-15 Thread durwin
I have 2 Fedora VMs (node1, and node2) running on a Windows 10 machine 
using Virtualbox.

I began with this.
http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Clusters_from_Scratch/

When it came to fencing, I refered to this.
http://www.linux-ha.org/wiki/SBD_Fencing

To the file /etc/sysconfig/sbd I added these lines.
SBD_OPTS="-W"
SBD_DEVICE="/dev/sdb1"
I added 'modprobe softdog' to rc.local
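On a systemd-based Fedora, an alternative to rc.local for loading softdog
at boot is a modules-load.d entry (a sketch; same effect, different
mechanism):

echo softdog > /etc/modules-load.d/softdog.conf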

After getting sbd working, I resumed with Clusters from Scratch, chapter 
8.3.
I executed these commands *only* on node1.  Am I supposed to run any of
these commands on other nodes? 'Clusters from Scratch' does not specify.
pcs cluster cib stonith_cfg
pcs -f stonith_cfg stonith create sbd-fence fence_sbd devices="/dev/sdb1" 
port="node2"
pcs -f stonith_cfg property set stonith-enabled=true
pcs cluster cib-push stonith_cfg

I then tried this command from node1.
stonith_admin --reboot node2

Node2 did not reboot or even shut down.  The command 'sbd -d /dev/sdb1 list'
showed node2 as off, but I was still logged into it (cluster status on
node2 showed not running).

I rebooted and ran this command on node2 and started the cluster.
sbd -d /dev/sdb1 message node2 clear

If I ran this command on node2, node2 rebooted.
stonith_admin --reboot node1

What have I missed or done wrong?


Thank you,

Durwin F. De La Rue
Management Sciences, Inc.
6022 Constitution Ave. NE
Albuquerque, NM  87110
Phone (505) 255-8611




Re: [ClusterLabs] SBD with shared block storage (and watchdog?)

2017-02-13 Thread durwin
emmanuel segura <emi2f...@gmail.com> wrote on 02/13/2017 10:55:58 AM:

> From: emmanuel segura <emi2f...@gmail.com>
> To: Cluster Labs - All topics related to open-source clustering 
> welcomed <users@clusterlabs.org>
> Date: 02/13/2017 10:56 AM
> Subject: Re: [ClusterLabs] SBD with shared block storage (and watchdog?)
> 
> modprobe softdog if you don't have an external watchdog

Thank you, that made sbd watch happy.

I now have this running on the 2 nodes.

11:10 AM root@node1 ~
fc25> ps aux|grep sbd
root 24426  0.0  0.6  97888 13848 pts/0    SL   11:00   0:00 sbd: inquisitor
root 24427  0.0  0.6  97892 13988 pts/0    SL   11:00   0:00 sbd: watcher: /dev/sdb1 - slot: 0 - uuid: 6094f0f4-2a07-47db-b4f7-6d478464d56a
root 24428  0.0  0.8 102476 18404 pts/0    SL   11:00   0:00 sbd: watcher: Pacemaker
root 29442  0.0  0.0 118520  1000 pts/0    S+   11:18   0:00 grep --color=auto sbd

11:18 AM root@node2 ~
fc25> ps aux|grep sbd
root 22784  0.0  0.6  97884 13844 pts/0    SL   11:18   0:00 sbd: inquisitor
root 22785  0.0  0.6  97888 13984 pts/0    SL   11:18   0:00 sbd: watcher: /dev/sdb1 - slot: 1 - uuid: 6094f0f4-2a07-47db-b4f7-6d478464d56a
root 22786  0.0  0.8 102472 18400 pts/0    SL   11:18   0:00 sbd: watcher: Pacemaker
root 22789  0.0  0.0 118520   952 pts/0    S+   11:18   0:00 grep --color=auto sbd

Is the fencing complete?
If so, will 'pcs cluster standby' simulate node failure?
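Note that 'pcs cluster standby' stops resources cleanly rather than
simulating a failure. A harsher test that should actually trigger fencing
is to crash a node's kernel (assumes sysrq is enabled; only do this on a
disposable VM):

echo c > /proc/sysrq-trigger   # run on the node that should get fenced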


Addressing the email that followed:

The device sdb1 is solely for sbd.  It sounds like you're saying it does
not get mounted.  Is that correct?  If I just unmount sdb1, will all be ok?
How small can I make the sbd block device?

Thank you,

Durwin


> 
> 2017-02-13 18:34 GMT+01:00  <dur...@mgtsciences.com>:
> > I am working to get an active/active cluster running.
> > I have Windows 10 running 2 Fedora 25 Virtualbox VMs.
> > VMs named node1, and node2.
> >
> > I created a vdi disk and set it to shared.
> > I formatted it to gfs2 with this command.
> >
> > mkfs.gfs2 -t msicluster:msigfs2 -j 2 /dev/sdb1
> >
> > After installing 'dlm' and ensuring guest additions were
> > installed, I was able to mount the gfs2 partition.
> >
> > I then followed.
> >
> > https://github.com/l-mb/sbd/blob/master/man/sbd.8.pod
> >
> > I used this command.
> >
> > sbd -d /dev/sdb1 create
> >
> > Using sbd to 'list' returns nothing, but 'dump' shows this.
> >
> > fc25> sbd -d /dev/sdb1 dump
> > ==Dumping header on disk /dev/sdb1
> > Header version : 2.1
> > UUID   : 6094f0f4-2a07-47db-b4f7-6d478464d56a
> > Number of slots: 255
> > Sector size: 512
> > Timeout (watchdog) : 5
> > Timeout (allocate) : 2
> > Timeout (loop) : 1
> > Timeout (msgwait)  : 10
> > ==Header on disk /dev/sdb1 is dumped
> >
> > I then tried the 'watch' command and journalctl shows error listed.
> >
> > sbd -d /dev/sdb1 -W -P watch
> >
> > Feb 13 09:54:09 node1 sbd[6908]:    error: watchdog_init: Cannot open
> > watchdog device '/dev/watchdog': No such file or directory (2)
> > Feb 13 09:54:09 node1 sbd[6908]:  warning: cleanup_servant_by_pid:
> > Servant for pcmk (pid: 6910) has terminated
> > Feb 13 09:54:09 node1 sbd[6908]:  warning: cleanup_servant_by_pid:
> > Servant for /dev/sdb1 (pid: 6909) has terminated
> >
> >
> > From
> >
> > http://blog.clusterlabs.org/blog/2015/sbd-fun-and-profit
> >
> > I installed watchdog.
> >
> > my /etc/sysconfig/sbd is.
> >
> > SBD_DELAY_START=no
> > SBD_OPTS=
> > SBD_PACEMAKER=yes
> > SBD_STARTMODE=clean
> > SBD_WATCHDOG_DEV=/dev/watchdog
> > SBD_WATCHDOG_TIMEOUT=5
> >
> > the sbd-fun-and-profit says to use this command.
> >
> > virsh edit vmnode
> >
> > But there is no vmnode and no instructions on how to create it.
> >
> > Is anyone able to piece together the missing steps?
> >
> >
> > Thank you.
> >
> > Durwin F. De La Rue
> > Management Sciences, Inc.
> > 6022 Constitution Ave. NE
> > Albuquerque, NM  87110
> > Phone (505) 255-8611
> >
> >

Re: [ClusterLabs] fence_vbox '--action=' not executing action

2017-02-02 Thread durwin
Kristoffer Grönlund <kgronl...@suse.com> wrote on 02/01/2017 10:49:54 PM:

> From: Kristoffer Grönlund <kgronl...@suse.com>
> To: dur...@mgtsciences.com, users@clusterlabs.org
> Date: 02/01/2017 11:23 PM
> Subject: Re: [ClusterLabs] fence_vbox '--action=' not executing action
> 
> dur...@mgtsciences.com writes:
> 
> > I have 2 Fedora 24 Virtualbox machines running on a Windows 10 host.
> > On the host, from a DOS shell, I can start 'node1' with,
> >
> > VBoxManage.exe startvm node1 --type headless
> >
> > I can shut it down with,
> >
> > VBoxManage.exe controlvm node1 acpipowerbutton
> >
> > But running fence_vbox from 'node2' does not work correctly.  Below
> > are two commands and their output.  The first action is 'status', the
> > second is 'off'.  Both get the list of running nodes, but 'off' does
> > *not* shut down or kill the node.
> >
> > Any ideas?
> 
> I haven't tested with Windows as the host OS for fence_vbox (I wrote the
> initial implementation of the agent). My guess from looking at your
> usage is that passing "cmd" to --ssh-options might not be
> sufficient to get it to work in that environment, but I have no idea
> what the right arguments might be.
> 
> Another possibility is that the command that fence_vbox tries to run
> doesn't work for you for some reason. It will either call
> 
> VBoxManage startvm <vm> --type headless
> 
> or
> 
> VBoxManage controlvm <vm> poweroff
> 
> when passed on or off as the --action parameter.
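Those two commands can also be run by hand on the Windows host to rule out
a VirtualBox-side problem, e.g. with the VM name used below:

VBoxManage controlvm node1 poweroff
VBoxManage startvm node1 --type headless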

If there is no further work being done on fence_vbox, is there a 'dummy'
fence which I might use to make STONITH happy in my configuration?  It
need only send the correct signals to STONITH so that I might create an
active/active cluster to experiment with.  This is only an experimental
configuration.

Thank you,

Durwin
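A testing stub does exist: Pacemaker's test tooling ships a fence_dummy
agent (the package layout and its availability on Fedora 25 are
assumptions worth checking). Something along these lines:

pcs stonith describe fence_dummy
pcs stonith create dummy-fence fence_dummy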

> 
> Cheers,
> Kristoffer
> 
> >
> > Thank you,
> >
> > Durwin
> >
> >
> > 02:04 PM root@node2 ~
> > fc25> fence_vbox --verbose --ip=172.23.93.249 --username=durwin 
> > --identity-file=/root/.ssh/id_rsa.pub --password= --plug="node1" 
> > --ssh-options="cmd" --command-prompt='>' --login-timeout=10 
> > --shell-timeout=20 --action=status
> > Running command: /usr/bin/ssh  durwin@172.23.93.249 -i 
> > /root/.ssh/id_rsa.pub -p 22 cmd
> > Received: Enter passphrase for key '/root/.ssh/id_rsa.pub':
> > Sent:
> >
> > Received:
> > stty: 'standard input': Inappropriate ioctl for device
> > Microsoft Windows [Version 10.0.14393]
> > (c) 2016 Microsoft Corporation. All rights reserved.
> >
> > D:\home\durwin>
> > Sent: VBoxManage list runningvms
> >
> > Received: VBoxManage list runningvms
> > VBoxManage list runningvms
> >
> > D:\home\durwin>
> > Sent: VBoxManage list vms
> >
> > Received: VBoxManage list vms
> > VBoxManage list vms
> > "node2" {14bff1fe-bd26-4583-829d-bc3a393b2a01}
> > "node1" {5a029c3c-4549-48be-8e80-c7a67584cd98}
> >
> > D:\home\durwin>
> > Status: OFF
> > Sent: quit
> >
> >
> >
> > 02:05 PM root@node2 ~
> > fc25> fence_vbox --verbose --ip=172.23.93.249 --username=durwin 
> > --identity-file=/root/.ssh/id_rsa.pub --password= --plug="node1" 
> > --ssh-options="cmd" --command-prompt='>' --login-timeout=10 
> > --shell-timeout=20 --action=off
> > Delay 0 second(s) before logging in to the fence device
> > Running command: /usr/bin/ssh  durwin@172.23.93.249 -i 
> > /root/.ssh/id_rsa.pub -p 22 cmd
> > Received: Enter passphrase for key '/root/.ssh/id_rsa.pub':
> > Sent:
> >
> > Received:
> > stty: 'standard input': Inappropriate ioctl for device
> > Microsoft Windows [Version 10.0.14393]
> > (c) 2016 Microsoft Corporation. All rights reserved.
> >
> > D:\home\durwin>
> > Sent: VBoxManage list runningvms
> >
> > Received: VBoxManage list runningvms
> > VBoxManage list runningvms
> >
> > D:\home\durwin>
> > Sent: VBoxManage list vms
> >
> > Received: VBoxManage list vms
> > VBoxManage list vms
> > "node2" {14bff1fe-bd26-4583-829d-bc3a393b2a01}
> > "node1" {5a029c3c-4549-48be-8e80-c7a67584cd98}
> >
> > D:\home\durwin>
> > Success: Already OFF
> > Sent: quit
> >
> >
> > Durwin F. De La Rue
> > Management Sciences, Inc.
> > 6022 Constitution Ave. NE
> > Albuquerque, NM  87110
> > Phone (505) 255-8611
> >
> >

Re: [ClusterLabs] Can fence_vbox ssh-options be configured to use Windows DOS shell?

2017-01-26 Thread durwin
Marek Grac <mg...@redhat.com> wrote on 01/26/2017 09:19:41 AM:

> From: Marek Grac <mg...@redhat.com>
> To: Cluster Labs - All topics related to open-source clustering 
> welcomed <users@clusterlabs.org>
> Date: 01/26/2017 09:20 AM
> Subject: Re: [ClusterLabs] Can fence_vbox ssh-options be configured 
> to use Windows DOS shell?
> 
> Hi,
> 
> On Thu, Jan 26, 2017 at 5:06 PM, <dur...@mgtsciences.com> wrote:
> I have Windows 10 running Virtualbox with 2 VMs running Fedora 25.  
> I have followed 'Pacemaker 1.1 Clusters from Scratch' 9th edition up
> through chapter 7.  It works.  I am now trying to fence VMs.  I use 
> Cygwin ssh daemon and of course bash is default for the options. 
> 
> I have used the command below from one of the nodes and get the
> following return.
> 
> fence_vbox -vv --ip=172.23.93.249 --username=durwin --identity-
> file=/root/.ssh/id_rsa.pub --password= --plug="node1" --action=off 
> 
> add --verbose please.
> 
> but it looks like you will have to change --ssh-options so it does not
> execute /bin/bash; it should be enough to set it to "". You will also
> have to set --command-prompt to an appropriate value then.
> 
> m,
>  

Thank you.  Verbose is set '-vv'.

I added --ssh-options="" to the command, see below.  I do not know how to 
find out what value command-prompt needs.  What do I look for?
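The value has to match the prompt the remote shell actually prints; in the
cmd.exe output below the prompt ends in '>', e.g. "D:\home\durwin>". The
2017-02-02 message in this thread uses exactly that:

fence_vbox ... --ssh-options="cmd" --command-prompt='>'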

fc25> fence_vbox --verbose --ip=172.23.93.249 --username=durwin 
--identity-file=/root/.ssh/id_rsa.pub --password= --plug="node1" 
--action=off --ssh-options=""
Delay 0 second(s) before logging in to the fence device
Running command: /usr/bin/ssh  durwin@172.23.93.249 -i 
/root/.ssh/id_rsa.pub -p 22
Timeout exceeded.

command: /usr/bin/ssh
args: ['/usr/bin/ssh', 'durwin@172.23.93.249', '-i', 
'/root/.ssh/id_rsa.pub', '-p', '22']
buffer (last 100 chars): "durwin@172.23.93.249's password: "
before (last 100 chars): "durwin@172.23.93.249's password: "
after: 
match: None
match_index: None
exitstatus: None
flag_eof: False
pid: 17014
child_fd: 6
closed: False
timeout: 30
delimiter: 
logfile: None
logfile_read: None
logfile_send: None
maxread: 2000
ignorecase: False
searchwindowsize: None
delaybeforesend: 0.05
delayafterclose: 0.1
delayafterterminate: 0.1
searcher: searcher_re:
0: re.compile("Enter passphrase for key '/root/.ssh/id_rsa.pub':")
1: re.compile("Are you sure you want to continue connecting (yes/no)?")
2: re.compile("\[EXPECT\]#\ ")
Unable to connect/login to fencing device


> Delay 0 second(s) before logging in to the fence device 
> Running command: /usr/bin/ssh  durwin@172.23.93.249 -i /root/.ssh/
> id_rsa.pub -p 22 -t '/bin/bash -c "PS1=\\[EXPECT\\]#\  /bin/bash --
> noprofile --norc"' 
> Received: Enter passphrase for key '/root/.ssh/id_rsa.pub': 
> Sent: 
> 
> Received: 
> [EXPECT]# 
> Sent: VBoxManage list runningvms 
> 
> Connection timed out 
> 
> I tried the following commands from a DOS shell on the Windows host 
> and commands successfully executed (from Cygwin terminal it fails). 
> 
> VBoxManage controlvm node1 acpipowerbutton 
> VBoxManage startvm node1 --type=gui 
> 
> I am aware that some Windows executables do not communicate with 
> Cygwin terminals.  Is there a way to pass ssh options so that 
> VBoxManage is executed from DOS shell? 
> 
> Thank you, 
> 
> Durwin
> 
> 




[ClusterLabs] Can fence_vbox ssh-options be configured to use Windows DOS shell?

2017-01-26 Thread durwin
I have Windows 10 running Virtualbox with 2 VMs running Fedora 25.  I have 
followed 'Pacemaker 1.1 Clusters from Scratch' 9th edition up through 
chapter 7.  It works.  I am now trying to fence VMs.  I use Cygwin ssh 
daemon and of course bash is default for the options.

I have used the command below from one of the nodes and get the following
return.

fence_vbox -vv --ip=172.23.93.249 --username=durwin 
--identity-file=/root/.ssh/id_rsa.pub --password= --plug="node1" 
--action=off
Delay 0 second(s) before logging in to the fence device
Running command: /usr/bin/ssh  durwin@172.23.93.249 -i 
/root/.ssh/id_rsa.pub -p 22 -t '/bin/bash -c "PS1=\\[EXPECT\\]#\ /bin/bash 
--noprofile --norc"'
Received: Enter passphrase for key '/root/.ssh/id_rsa.pub':
Sent:

Received:
[EXPECT]#
Sent: VBoxManage list runningvms

Connection timed out

I tried the following commands from a DOS shell on the Windows host and 
commands successfully executed (from Cygwin terminal it fails).

VBoxManage controlvm node1 acpipowerbutton
VBoxManage startvm node1 --type=gui

I am aware that some Windows executables do not communicate with Cygwin 
terminals.  Is there a way to pass ssh options so that VBoxManage is 
executed from DOS shell?

Thank you,

Durwin
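For reference, the follow-up thread (2017-02-02, above) arrived at an
invocation that at least reaches the DOS shell:

fence_vbox --verbose --ip=172.23.93.249 --username=durwin --identity-file=/root/.ssh/id_rsa.pub --password= --plug="node1" --ssh-options="cmd" --command-prompt='>' --login-timeout=10 --shell-timeout=20 --action=status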




Re: [ClusterLabs] How to Fence Virtualbox VM with Windows 10 as host.

2017-01-25 Thread durwin
I found the place where UUID is specified (Virtual Media Manager).

Thank you.

Durwin
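The VM UUIDs can also be read from the command line; the 'VBoxManage list
vms' output elsewhere in this thread shows the format:

VBoxManage list vms
"node2" {14bff1fe-bd26-4583-829d-bc3a393b2a01}
"node1" {5a029c3c-4549-48be-8e80-c7a67584cd98}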

> From: Marek Grac <mg...@redhat.com>
> To: Cluster Labs - All topics related to open-source clustering 
> welcomed <users@clusterlabs.org>
> Date: 01/25/2017 01:34 AM
> Subject: Re: [ClusterLabs] How to Fence Virtualbox VM with Windows 10 as
> host.
> 
> Hi,
> 
> On Tue, Jan 24, 2017 at 9:06 PM, <dur...@mgtsciences.com> wrote:
> This is my first attempt at clustering, just so you know the level 
> required to convey ideas. 
> 
> I have Windows 10 running Virtualbox with 2 VMs running Fedora 25.  
> I have followed 'Pacemaker 1.1 Clusters from Scratch' 9th edition up
> through chapter 7.  It works.  I am uncertain as how to fence the 
> VMs with Windows 10 as host.  The output from 'pcs stonith describe 
> fence_vbox' is below. 
> 
> I have Cygwin installed with sshd configured and running.  I can 
> remotely ssh into the Windows 10 machine.  I can add the keys from 
> the machines into Windows authorized_keys so no user/password is 
> required.  I however do not know which of the options are 
> *required*.  Nor do I know what the options should be set to.  Some 
> of the options *are* obvious.  If I use *only* required ones, ipaddr
> is obvious, login is obvious, but not sure what port is.  Would it 
> be the name of the VM as Virtualbox knows it? 
> 
> ipaddr (required): IP address or hostname of fencing device 
> login (required): Login name 
> port (required): Physical plug number on device, UUID or 
> identification of machine 
> 
> Does the host require anything running on it to support the fence?  
> Do I require any other options in addition to 'required'?  How do I 
> test it from a nodes commandline? 
> 
> You can take a look at the manual page of fence_vbox (or run
> fence_vbox --help).
> 
> In your case it should be enough to set:
> * ipaddr
> * login
> * port (= node to shutdown)
> * identity_file (= private key)
> 
> m,





Re: [ClusterLabs] How to Fence Virtualbox VM with Windows 10 as host.

2017-01-25 Thread durwin
Marek Grac <mg...@redhat.com> wrote on 01/25/2017 01:33:07 AM:

> From: Marek Grac <mg...@redhat.com>
> To: Cluster Labs - All topics related to open-source clustering 
> welcomed <users@clusterlabs.org>
> Date: 01/25/2017 01:34 AM
> Subject: Re: [ClusterLabs] How to Fence Virtualbox VM with Windows 10 as
> host.
> 
> Hi,
> 
> On Tue, Jan 24, 2017 at 9:06 PM, <dur...@mgtsciences.com> wrote:
> This is my first attempt at clustering, just so you know the level 
> required to convey ideas. 
> 
> I have Windows 10 running Virtualbox with 2 VMs running Fedora 25.  
> I have followed 'Pacemaker 1.1 Clusters from Scratch' 9th edition up
> through chapter 7.  It works.  I am uncertain as how to fence the 
> VMs with Windows 10 as host.  The output from 'pcs stonith describe 
> fence_vbox' is below. 
> 
> I have Cygwin installed with sshd configured and running.  I can 
> remotely ssh into the Windows 10 machine.  I can add the keys from 
> the machines into Windows authorized_keys so no user/password is 
> required.  I however do not know which of the options are 
> *required*.  Nor do I know what the options should be set to.  Some 
> of the options *are* obvious.  If I use *only* required ones, ipaddr
> is obvious, login is obvious, but not sure what port is.  Would it 
> be the name of the VM as Virtualbox knows it? 
> 
> ipaddr (required): IP address or hostname of fencing device 
> login (required): Login name 
> port (required): Physical plug number on device, UUID or 
> identification of machine 
> 
> Does the host require anything running on it to support the fence?  
> Do I require any other options in addition to 'required'?  How do I 
> test it from a nodes commandline? 
> 
> You can take a look at the manual page of fence_vbox (or run
> fence_vbox --help).
> 
> In your case it should be enough to set:
> * ipaddr
> * login
> * port (= node to shutdown)

Thank you.  The man page says:

   -n, --plug=[id]
      Physical plug number on device, UUID or identification of
      machine. This parameter is always required.

How do I determine the 'plug' of the VM?  The output of 'pcs stonith
describe fence_vbox' refers to this as port (to my understanding, very
misleading).

From your reply, am I correct in assuming that nothing is required to run
on the Windows host to support fencing?

Thank you,

Durwin

> * identity_file (= private key)
> 
> m,





[ClusterLabs] How to Fence Virtualbox VM with Windows 10 as host.

2017-01-24 Thread durwin
This is my first attempt at clustering, just so you know the level 
required to convey ideas.

I have Windows 10 running Virtualbox with 2 VMs running Fedora 25.  I have 
followed 'Pacemaker 1.1 Clusters from Scratch' 9th edition up through 
chapter 7.  It works.  I am uncertain as how to fence the VMs with Windows 
10 as host.  The output from 'pcs stonith describe fence_vbox' is below.

I have Cygwin installed with sshd configured and running.  I can remotely 
ssh into the Windows 10 machine.  I can add the keys from the machines 
into Windows authorized_keys so no user/password is required.  I however 
do not know which of the options are *required*.  Nor do I know what the 
options should be set to.  Some of the options *are* obvious.  If I use 
*only* required ones, ipaddr is obvious, login is obvious, but not sure 
what port is.  Would it be the name of the VM as Virtualbox knows it?

ipaddr (required): IP address or hostname of fencing device
login (required): Login name
port (required): Physical plug number on device, UUID or 
identification of machine

Does the host require anything running on it to support the fence?  Do I
require any other options in addition to 'required'?  How do I test it
from a node's command line?


Thank you,

Durwin
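For reference, the follow-up messages in this thread test the agent
directly from a node with an invocation along these lines (values are
placeholders taken from those messages):

fence_vbox --ip=WINDOWS_HOST --username=USER --identity-file=/root/.ssh/id_rsa.pub --plug="node1" --action=status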


fc25> pcs stonith describe fence_vbox
fence_vbox - Fence agent for VirtualBox

fence_vbox is an I/O Fencing agent which can be used with the virtual 
machines managed by VirtualBox. It logs via ssh to a dom0 where it runs 
VBoxManage to do all of the work.
.P
By default, vbox needs to log in as a user that is a member of the 
vboxusers group. Also, you must allow ssh login in your sshd_config.

Resource options:
  action: Fencing action WARNING: specifying 'action' is deprecated and 
not necessary with current Pacemaker versions
  cmd_prompt: Force Python regex for command prompt
  identity_file: Identity file (private key) for SSH
  inet4_only: Forces agent to use IPv4 addresses only
  inet6_only: Forces agent to use IPv6 addresses only
  ipaddr (required): IP address or hostname of fencing device
  ipport: TCP/UDP port to use for connection with device
  login (required): Login name
  passwd: Login password or passphrase
  passwd_script: Script to run to retrieve password
  port (required): Physical plug number on device, UUID or identification 
of machine
  secure: Use SSH connection
  ssh_options: SSH options to use
  separator: Separator for CSV created by 'list' operation
  delay: Wait X seconds before fencing is started
  login_timeout: Wait X seconds for cmd prompt after login
  missing_as_off: Missing port returns OFF instead of failure
  power_timeout: Test X seconds for status change after ON/OFF
  power_wait: Wait X seconds after issuing ON/OFF
  shell_timeout: Wait X seconds for cmd prompt after issuing command
  retry_on: Count of attempts to retry power on
  sudo: Use sudo (without password) when calling 3rd party software
  ssh_path: Path to ssh binary
  sudo_path: Path to sudo binary
  priority: The priority of the stonith resource. Devices are tried in 
order of highest priority to lowest.
  pcmk_host_map: A mapping of host names to ports numbers for devices that 
do not support host names. Eg. node1:1;node2:2,3 would tell the cluster to 
use port 1 for node1 and ports 2 and 3 for node2
  pcmk_host_list: A list of machines controlled by this device (Optional 
unless pcmk_host_check=static-list).
  pcmk_host_check: How to determine which machines are controlled by the 
device. Allowed values: dynamic-list (query the device), static-list 
(check the pcmk_host_list attribute), none (assume every device can fence 
every machine)
  pcmk_delay_max: Enable random delay for stonith actions and specify the 
maximum of random delay This prevents double fencing when using slow 
devices such as sbd. Use this to enable random delay for stonith actions 
and specify the maximum of random delay.
  pcmk_action_limit: The maximum number of actions can be performed in 
parallel on this device Pengine property concurrent-fencing=true needs to 
be configured first. Then use this to specify the maximum number of 
actions can be performed in parallel on this device. -1 is unlimited.

Durwin F. De La Rue
Management Sciences, Inc.
6022 Constitution Ave. NE
Albuquerque, NM  87110
Phone (505) 255-8611


Project Home: h