Re: [ClusterLabs] Stonith stops after vSphere restart

2018-02-22 Thread Marek Grac
Hi,

On Thu, Feb 22, 2018 at 11:58 AM,  wrote:

>
> Hi,
>
> I have a 2-node pacemaker cluster configured with the fence agent
> vmware_soap.
> Everything works fine until vCenter is restarted. After that, stonith
> fails and stops.
>

This is expected: we run the 'monitor' action to find out whether the fence
device is working, and I assume it does not respond while vCenter is
restarting. If your fencing device fails, manual intervention makes sense,
because you have to have working fencing in order to prevent data corruption.
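
That said, if you want the cluster to retry the failed stonith resource on
its own once vCenter is back, one option is the failure-timeout resource meta
attribute, which expires recorded failures so that the start is retried. A
minimal sketch, assuming pcs syntax and an arbitrary 5-minute value:

# expire recorded failures after 5 minutes; the cluster then re-attempts
# starting the stonith resource on its next recheck
pcs resource update vmware_soap meta failure-timeout=300s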

m,


>
> [root@node1 ~]# pcs status
> Cluster name: psqltest
> Stack: corosync
> Current DC: node2 (version 1.1.16-12.el7_4.7-94ff4df) - partition with
> quorum
> Last updated: Thu Feb 22 11:30:22 2018
> Last change: Mon Feb 19 09:28:37 2018 by root via crm_resource on node1
>
> 2 nodes configured
> 6 resources configured
>
> Online: [ node1 node2 ]
>
> Full list of resources:
>
> Master/Slave Set: ms_drbd_psqltest [drbd_psqltest]
> Masters: [ node1 ]
> Slaves: [ node2 ]
> Resource Group: pgsqltest
> psqltestfs (ocf::heartbeat:Filesystem): Started node1
> psqltest_vip (ocf::heartbeat:IPaddr2): Started node1
> postgresql-94 (ocf::heartbeat:pgsql): Started node1
> vmware_soap (stonith:fence_vmware_soap): Stopped
>
> Failed Actions:
> * vmware_soap_start_0 on node1 'unknown error' (1): call=38, status=Error,
> exitreason='none',
> last-rc-change='Thu Feb 22 10:55:46 2018', queued=0ms, exec=5374ms
> * vmware_soap_start_0 on node2 'unknown error' (1): call=56, status=Error,
> exitreason='none',
> last-rc-change='Thu Feb 22 10:55:39 2018', queued=0ms, exec=5479ms
>
> Daemon Status:
> corosync: active/enabled
> pacemaker: active/enabled
> pcsd: active/enabled
>
>
> [root@node1 ~]# pcs stonith show --full
> Resource: vmware_soap (class=stonith type=fence_vmware_soap)
> Attributes: inet4_only=1 ipaddr=192.168.1.1 ipport=443 login=MYDOMAIN\User
> passwd=mypass pcmk_host_list=node1,node2 power_wait=3 ssl_insecure=1
> action= pcmk_list_timeout=120s pcmk_monitor_timeout=120s
> pcmk_status_timeout=120s
> Operations: monitor interval=60s (vmware_soap-monitor-interval-60s)
>
>
> I need to manually perform a "resource cleanup vmware_soap" to put it
> online again.
> Is there any way to do this automatically?
> Is it possible to detect that vSphere is online again and re-enable stonith?
>
> Thanks.
>
> ___
> Users mailing list: Users@clusterlabs.org
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>
>
___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Re: How to create the stonith resource in virtualbox

2018-02-09 Thread Marek Grac
Hi,

for fence_vbox, take a look at my older blog post:
https://ox.sk/howto-fence-vbox-cdd3da374ecd

If all you need is fencing in a state where dlm works, and you promise that
you will never have real data on it, there is an easy hack; it really does
not matter which fence agent you use. All we care about is whether the
'monitor' action works, so add the option:

pcmk_monitor_action=metadata

This means that instead of the monitor action, the 'metadata' action is used,
which just prints the XML metadata and succeeds.
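
For example, applied to the second device from the quoted mail below (a
sketch; scsi-shooter is the device name used there):

pcs stonith update scsi-shooter pcmk_monitor_action=metadata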

m,

On Fri, Feb 9, 2018 at 6:33 AM, 范国腾  wrote:

> Thanks Klaus,
>
> The information is very helpful. I will try to study fence_vbox and
> fence_sbd.
>
> In our test lab, we use ipmi as the stonith. But I want to set up a
> simulator environment on my laptop, so I just need the stonith resource in
> the started state so that I can create the dlm and clvm resources; I don't
> need it to really work. Does anybody have another suggestion?
>
>
> -----Original Message-----
> From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Klaus Wenninger
> Sent: 2018-02-09 1:11
> To: users@clusterlabs.org
> Subject: Re: [ClusterLabs] How to create the stonith resource in virtualbox
>
> On 02/08/2018 02:05 PM, Andrei Borzenkov wrote:
> > On Thu, Feb 8, 2018 at 5:51 AM, 范国腾  wrote:
> >> Hello,
> >>
> >> I setup the pacemaker cluster using virtualbox. There are three nodes.
> The OS is centos7, the /dev/sdb is the shared storage(three nodes use the
> same disk file).
> >>
> >> (1) At first, I create the stonith using this command:
> >> pcs stonith create scsi-stonith-device fence_scsi
> >> devices=/dev/mapper/fence pcmk_monitor_action=metadata
> >> pcmk_reboot_action=off pcmk_host_list="db7-1 db7-2 db7-3" meta
> >> provides=unfencing;
> >>
> I know the VM does not have /dev/mapper/fence. But sometimes the stonith
> resource is able to start, sometimes not. I don't know why; it is not stable.
> >>
> > It probably tries to check the resource and fails. The state of a stonith
> > resource is irrelevant for the actual fencing operation (the resource is
> > only used for periodic checks, not for fencing itself).
> >
> >> (2) Then I use the following command to setup stonith using the shared
> disk /dev/sdb:
> >> pcs stonith create scsi-shooter fence_scsi
> >> devices=/dev/disk/by-id/ata-VBOX_HARDDISK_VBc833e6c6-af12c936 meta
> >> provides=unfencing
> >>
> >> But the stonith is always stopped and the log shows:
> >> Feb  7 15:45:53 db7-1 stonith-ng[8166]: warning: fence_scsi[8197]
> >> stderr: [ Failed: nodename or key is required ]
> >>
> > Well, you need to provide what is missing - your command did not
> > specify any host.
> >
> >> Could anyone tell me the correct command to set up stonith in a VM on
> >> centos? Is there any document introducing this that I could study?
>
> I personally don't have any experience setting up a pacemaker-cluster in
> vbox.
>
> Thus I'm limited to giving rather general advice.
>
> What you might have to verify when using fence_scsi is whether the SCSI
> emulation vbox offers lives up to the requirements of fence_scsi.
> I've read about trouble in a posting back from 2015. The author then went
> for using scsi via iSCSI.
>
> Otherwise you could look for alternatives to fence_scsi.
>
> One might be fence_vbox. It doesn't come with centos so far iirc but the
> upstream repo on github has it.
> Fencing via the hypervisor is in general not a bad idea when it comes to
> clusters running in VMs (If you can live with the boundary conditions like
> giving certain credentials to the VMs that allow communication with the
> hypervisor.).
> There was some discussion about fence_vbox on the clusterlabs-list a
> couple of months ago. iirc there had been issues with using windows as a
> host for vbox - but I guess they were fixed in the course of this
> discussion.
>
> Another way of doing fencing via a shared disk is fence_sbd (available in
> centos) - although it uses the disk quite differently from fence_scsi.
> One difference that might be helpful here is that it places fewer
> requirements on which disk-infrastructure is emulated.
> On the other hand, for sbd in general it is strongly advised to use a good
> watchdog device (one that brings down your machine - virtual or physical -
> in a very reliable manner). And afaik the only watchdog device available
> inside a vbox VM is softdog, which doesn't meet this requirement too well,
> as it relies on the kernel running in the VM being at least partially
> functional.
>
> Sorry for not being able to help in a more specific way but I would be
> interested in which ways of fencing people are using when it comes to
> clusters based on vbox VMs myself ;-)
>
> Regards,
> Klaus
> >>
> >>
> >> Thanks
> >>
> >>
> >> Here is the cluster status:
> >> [root@db7-1 ~]# pcs status
> >> Cluster name: cluster_pgsql
> >> Stack: corosync
> >> Current DC: db7-2 (version 1.1.16-12.el7_4.7-94ff4df) - partition
> >> with quorum Last updated: Wed Feb  7 16:27:13 2018 Last 

Re: [ClusterLabs] Antw: fence_vmware_soap: reads VM status but fails to reboot/on/off

2017-08-01 Thread Marek Grac
Hi,

> But when I call any of the power actions (on, off, reboot) I get "Failed:
> > Timed out waiting to power OFF".
> >
> > I've tried with all the combinations of --power-timeout and --power-wait
> > and same error without any change in the response time.
> >
> > Any ideas from where or how to fix this issue ?
>

No, you have used the right options, and if they were high enough it should
work. You can try to post verbose (anonymized) output and we can take a
deeper look at it.
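
A sketch of producing such output, with placeholder address, credentials and
VM name (replace them with anonymized values before posting):

fence_vmware_soap -a vcenter.example.com -l user -p pass -z --ssl-insecure \
  -n myvm -o reboot -v 2>&1 | tee fence_vmware.log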


>
> I suspect "power off" is actually a virtual press of the ACPI power button
> (reboot likewise), so your VM tries to shut down cleanly. That could take
> time, and it could hang (I guess). I don't use VMware, but maybe there's a
> "reset" action that presses the virtual reset button of the virtual
> hardware... ;-)
>

There should not be a fence agent that does a soft reboot. The 'reset'
action does power off / check status / power on, so we are sure that the
machine was really down (of course unless --method cycle is used, in which
case the device's 'reboot' button is used).

m,
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] fence_vbox Unable to connect/login to fencing device

2017-07-11 Thread Marek Grac
Hi,

On Tue, Jul 11, 2017 at 11:13 AM, ArekW <arkad...@gmail.com> wrote:

> Hi,
> I may be wrong, but it doesn't seem to be a timeout problem, because the
> log repeats the same way every few minutes and it contains "Unable to
> connect" followed immediately by the list of vms etc., so it has connected
> successfully.
>

After an unsuccessful monitor attempt, your settings may trigger another
attempt. In some cases a second ssh connection can be much faster, so the
second attempt succeeds more often.


> I described an active-active failover problem in a separate mail. When a
> node is powered off, the cluster enters UNCLEAN status and the whole thing
> hangs. Could it be related to the stonith problem? I'm out of ideas about
> what is wrong, because it seems to work manually but not as a fence process.
> How can I increase the login_timeout (is it for stonith?)
>

add login_timeout=XXs (or look at the manual page for other timeout options)
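
A sketch of how that might look with pcs, using the device name from this
thread and an arbitrary 60s value (as seen above, older pcs may require
--force for options it does not recognize):

pcs stonith update vbox-fencing login_timeout=60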

m,


> Thanks
> Arek
>
> 2017-07-10 13:10 GMT+02:00 Marek Grac <mg...@redhat.com>:
>
>>
>>
>> On Fri, Jul 7, 2017 at 1:45 PM, ArekW <arkad...@gmail.com> wrote:
>>
>>> The reason for --force is:
>>> Error: missing required option(s): 'ipaddr, login, plug' for resource
>>> type: stonith:fence_vbox (use --force to override)
>>>
>>
>> It looks like you use an unreleased upstream of fence agents without a
>> similarly new version of pcs (with the commit
>> 7f85340b7aa4e8c016720012cf42c304e68dd1fe)
>>
>>
>>>
>>> I have selinux disabled on both nodes:
>>> [root@nfsnode1 ~]# cat /etc/sysconfig/selinux
>>> SELINUX=disabled
>>>
>>> pcs stonith update vbox-fencing verbose=true
>>> Error: resource option(s): 'verbose', are not recognized for resource
>>> type: 'stonith::fence_vbox' (use --force to override)
>>>
>>
>> It should be fixed in commit b47558331ba6615aa5720484301d644cc8e973fd
>> (Jun 12)
>>
>>
>>>
>>>
>>
>>>
>>> Jul  7 13:37:49 nfsnode1 fence_vbox: Unable to connect/login to fencing
>>> device
>>> Jul  7 13:37:49 nfsnode1 stonith-ng[2045]: warning: fence_vbox[4765]
>>> stderr: [ Running command: /usr/bin/ssh -4  AW23321@10.0.2.2 -i
>>> /root/.ssh/id_rsa -p 22 -t '/bin/bash -c "PS1=\\[EXPECT\\]#\  /bin/bash
>>> --noprofile --norc"' ]
>>>
>>
>> OK, so sometimes it works and sometimes it does not. It looks like our
>> timeouts are quite strict for your environment. Try increasing
>> login_timeout above the default of 30s.
>>
>> m,
>>
>> ___
>> Users mailing list: Users@clusterlabs.org
>> http://lists.clusterlabs.org/mailman/listinfo/users
>>
>> Project Home: http://www.clusterlabs.org
>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>> Bugs: http://bugs.clusterlabs.org
>>
>>
>
> ___
> Users mailing list: Users@clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>
>
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] fence_vbox Unable to connect/login to fencing device

2017-07-10 Thread Marek Grac
On Fri, Jul 7, 2017 at 1:45 PM, ArekW  wrote:

> The reason for --force is:
> Error: missing required option(s): 'ipaddr, login, plug' for resource
> type: stonith:fence_vbox (use --force to override)
>

It looks like you use an unreleased upstream of fence agents without a
similarly new version of pcs (with the commit
7f85340b7aa4e8c016720012cf42c304e68dd1fe)


>
> I have selinux disabled on both nodes:
> [root@nfsnode1 ~]# cat /etc/sysconfig/selinux
> SELINUX=disabled
>
> pcs stonith update vbox-fencing verbose=true
> Error: resource option(s): 'verbose', are not recognized for resource
> type: 'stonith::fence_vbox' (use --force to override)
>

It should be fixed in commit b47558331ba6615aa5720484301d644cc8e973fd (Jun 12)


>
>

>
> Jul  7 13:37:49 nfsnode1 fence_vbox: Unable to connect/login to fencing
> device
> Jul  7 13:37:49 nfsnode1 stonith-ng[2045]: warning: fence_vbox[4765]
> stderr: [ Running command: /usr/bin/ssh -4  AW23321@10.0.2.2 -i
> /root/.ssh/id_rsa -p 22 -t '/bin/bash -c "PS1=\\[EXPECT\\]#\  /bin/bash
> --noprofile --norc"' ]
>

OK, so sometimes it works and sometimes it does not. It looks like our
timeouts are quite strict for your environment. Try increasing login_timeout
above the default of 30s.

m,
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] fence_vbox Unable to connect/login to fencing device

2017-07-07 Thread Marek Grac
Hi,

On Fri, Jul 7, 2017 at 8:02 AM, ArekW  wrote:

> Hi,
> I did a small research on the scripts
>
> /usr/sbin/fence_vbox
> def main():
> ...
> conn = fence_login(options)
>
> The fence_login function is defined in fencing.py and it should invoke
> the function _login_ssh_with_identity_file:
>
> /usr/share/fence/fencing.py
> def _login_ssh_with_identity_file:
> ...
> command = '%s %s %s@%s -i %s -p %s' % \
> (options["--ssh-path"], force_ipvx, options["--username"],
> options["--ip"], \
> options["--identity-file"], options["--ipport"])
>
> The username and ip parameters are used here (not login and ipaddr as in
> the fence description), so I used:
>

You have noticed this correctly; it is due to backward compatibility. We are
working towards the ability to use the command-line option names everywhere
(it is already in upstream but not yet supported in pcs).

So 'login=FOO' is the same as '--username FOO' / '-l FOO'. Misleading, at
least. The mapping between the two naming schemes was available on our wiki
pages; it is available in the documentation and (in a somewhat less readable
way) in the manual page.
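
To make the mapping concrete, both forms below are taken from commands quoted
in this thread:

# legacy stdin / pcs attribute names
ipaddr=10.0.2.2 login=AW23321 identity_file=/root/.ssh/id_rsa

# equivalent command-line options
fence_vbox --ip 10.0.2.2 --username AW23321 --identity-file /root/.ssh/id_rsa ...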



>
> pcs stonith create vbox-fencing fence_vbox ip=10.0.2.2 username=AW23321
> identity_file=/root/.ssh/id_rsa host_os=windows
> vboxmanage_path="/cygdrive/c/Program\ Files/Oracle/VirtualBox/VBoxManage"
> pcmk_host_map="nfsnode1:centos1;nfsnode2:centos2" ssh=true
> inet4_only=true op monitor interval=5 -force
>

* Why are you using -force?

* ssh=true is not a valid option (it is ignored, and a warning should be in
the logs), and fence_vbox can use ssh only. [secure=true will do what you
want]
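
A sketch of the corrected create command - the one quoted above with ssh=true
replaced by secure=true (depending on your pcs version you may still need
-force, as discussed earlier in the thread):

pcs stonith create vbox-fencing fence_vbox ip=10.0.2.2 username=AW23321 \
  identity_file=/root/.ssh/id_rsa host_os=windows \
  vboxmanage_path="/cygdrive/c/Program\ Files/Oracle/VirtualBox/VBoxManage" \
  pcmk_host_map="nfsnode1:centos1;nfsnode2:centos2" secure=true \
  inet4_only=true op monitor interval=5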



>
> I still got the same warning in messages:
> Jul  7 07:52:24 nfsnode1 stonith-ng[6244]: warning: fence_vbox[21564]
> stderr: [ Unable to connect/login to fencing device ]
> Jul  7 07:52:24 nfsnode1 stonith-ng[6244]: warning: fence_vbox[21564]
> stderr: [  ]
> Jul  7 07:52:24 nfsnode1 stonith-ng[6244]: warning: fence_vbox[21564]
> stderr: [  ]
>
> "Standalone" test is working with the same parameters:
> [root@nfsnode1 nfsinfo]# fence_vbox --ip 10.0.2.2 --username=AW23321
> --identity-file=/root/.ssh/id_rsa --plug=centos2 --host-os=windows
> --action=status --vboxmanage-path="/cygdrive/c/Program\
> Files/Oracle/VirtualBox/VBoxManage" -4 -x
> Status: ON
>

This looks like SELinux to me. From the command line you are in the
unconfined domain, so no checks are performed. Try looking at the SELinux
boolean "fenced_can_ssh".


> I could use more debugging output in the scripts.
>
You can use verbose=true (-v) and it will display all input/output
operations. In the case of fence_vbox you will see what we attempt to run and
what the output of these commands is. If there is a need for more detailed
output, please let me know and I will try to add it.

m,
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] IPMI and APC switched PDUs fencing agents

2017-06-07 Thread Marek Grac
Hi,

On Tue, Jun 6, 2017 at 3:52 PM, Jean-Francois Malouin <
jean-francois.malo...@bic.mni.mcgill.ca> wrote:

> Hi,
>
> Starting to configure a two-node cluster, running Debian Jessie 8.8.
> Stack: corosync, pacemaker version 1.1.15-e174ec8.
>
> I'm confused by the different fencing agents related to IPMI; it seems
> that there are a few: external/ipmi, fence_ipmilan, ipmilan.  Right now
> I'm using external/ipmi, which seems to do the job as far as the little
> testing I've done goes. But can someone explain how the other agents
> differ?
>

In general, they are almost the same, as they all use the ipmitool binary.
There are a few differences, e.g. better configurability or some specific
features, but as long as your agent works there is no reason to change it.


>
> I also have 2 switched PDUs, APC AP7921, that I plan to use as second
> stonith devices. Any hints, best practices, or docs on how to make this
> work in conjunction with the IPMI fencing agent?
>

Take a look at

http://wiki.clusterlabs.org/wiki/Configure_Multiple_Fencing_Devices_Using_pcs

If you are using crm, the syntax will differ but the idea is the same.
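
The idea from that page sketched with pcs, using hypothetical device names
(IPMI is tried first, the APC PDU is the fallback):

pcs stonith level add 1 node1 ipmi-fence-node1
pcs stonith level add 2 node1 apc-pdu-fence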

m,
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Question about fence_mpath

2017-05-02 Thread Marek Grac
Hi,

On Fri, Apr 28, 2017 at 8:09 PM, Chris Adams  wrote:

> It creates, but any time anything tries to fence (manually or by
> rebooting a node), I get errors in /var/log/messages.  Trying to
> manually fence a node gets:
>
> # pcs stonith fence node2 --off
> Error: unable to fence 'node2'
> Command failed: No such device
>
> Another issue I run into is that fence_mpath tries to access/write to
> /var/run/cluster/mpath.devices, but nothing else creates that directory
> (and it seems that fence_mpath tries to read from it before writing it
> out).
>

The mpath.devices file is created during the 'unfence' (ON) action; this is
very similar to fence_scsi, where unfencing is required as well.


> Anybody using fence_mpath as a STONITH device with pacemaker/corosync on
> CentOS 7?
>

This is my testing scenario:

1) working multipath
2) add name to the multipath device [for every node]
  * multipath -l (will get you WWID of device)
  * in /etc/multipath.conf
  * uncomment multipaths, multipath section and set WWID & alias
  * in this example [yellow]
3) on each node:
  * add reservation_key 0x123 (where 0x123 is a value unique for each node)
  * in this example [0x123, 0x456]
4) on each node; restart multipathd and check if you have /dev/mapper/yellow
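
A sketch of the corresponding /etc/multipath.conf fragments; the WWID below
is a placeholder (use the one reported by multipath -l), and each node gets
its own reservation_key:

defaults {
    reservation_key 0x123
}

multipaths {
    multipath {
        wwid  36001405abcdef0123456789abcdef012
        alias yellow
    }
}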

node63:

[root@host-063 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 123
Status: OFF
[root@host-063 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 456
Status: OFF
--
node63:

[root@host-063 ~]# fence_mpath -d /dev/mapper/yellow -o on -k 123
Success: Powered ON
[root@host-063 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 123
Status: ON
[root@host-063 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 456
Status: OFF
--
node64:

[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 123
Status: ON
[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 456
Status: OFF
--
node64:
(attempt to fence a machine while the node itself has not been unfenced)

[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o off -k 123
Failed: Cannot open file "/var/run/cluster/mpath.devices"
-
node64:

[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o on -k 456
Success: Powered ON
[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 123
Status: ON
[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 456
Status: ON
[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o off -k 123
Success: Powered OFF
[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 123
Status: OFF
[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 456
Status: ON

 m,
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] stonith device locate on same host in active/passive cluster

2017-05-02 Thread Marek Grac
Hi,



On Tue, May 2, 2017 at 3:39 AM, Albert Weng  wrote:

> Hi All,
>
> I have created active/passive pacemaker cluster on RHEL 7.
>
> here is my environment:
> clustera : 192.168.11.1
> clusterb : 192.168.11.2
> clustera-ilo4 : 192.168.11.10
> clusterb-ilo4 : 192.168.11.11
>
> both nodes are connected SAN storage for shared storage.
>
> i used the following cmd to create my stonith devices on each node :
> # pcs -f stonith_cfg stonith create ipmi-fence-node1 fence_ipmilan parms
> lanplus="ture" pcmk_host_list="clustera" pcmk_host_check="static-list"
> action="reboot" ipaddr="192.168.11.10" login=adminsitrator passwd=1234322
> op monitor interval=60s
>
> # pcs -f stonith_cfg stonith create ipmi-fence-node02 fence_ipmilan parms
> lanplus="true" pcmk_host_list="clusterb" pcmk_host_check="static-list"
> action="reboot" ipaddr="192.168.11.11" login=USERID passwd=password op
> monitor interval=60s
>
> # pcs status
> ipmi-fence-node1 clustera
> ipmi-fence-node2 clusterb
>
> but when I fail over to the passive node and then run
> # pcs status
>
> ipmi-fence-node1    clusterb
> ipmi-fence-node2    clusterb
>
> why do both fence devices end up on the same node?
>

When node 'clustera' is down, is there any other place where
ipmi-fence-node* could be executed?

If you are worried that a node cannot self-fence, you are right. But once
'clustera' becomes available again, an attempt to fence clusterb will work
as expected.
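
If you nevertheless want each fence device to stay away from the node it
fences, a location constraint is one way to express that; a sketch with pcs,
using the names from this thread:

pcs constraint location ipmi-fence-node1 avoids clustera
pcs constraint location ipmi-fence-node2 avoids clusterb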

m,
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Fence agent for VirtualBox

2017-02-27 Thread Marek Grac
Hi,

On Thu, Feb 23, 2017 at 5:32 PM,  wrote:

> Klaus Wenninger  wrote on 02/23/2017 01:12:19 AM:
>


> > > There is a major issue with current setup in Windows.  You have to
> > > start virtual machines from openssh connection if you wish to manage
> > > them from openssh connection.
> > >
> >
> > Have read about similar issues with openssh on Windows for other
> use-cases
> > and that other ssh-implementations for Windows seem to do better / more
> > userfriendly.
>
> Any idea how Cygwin sshd would work?  I use Cygwin for everything and
> already have sshd running.


No, feel free to test it. But in this case it looks like openssh is not the
issue. We found out that the only difference in 'whoami /all' output is
whether you are LOCAL or not. It makes sense that ssh connections are
non-local, but we did not have the experience to track it further.

m,
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Fence agent for VirtualBox

2017-02-22 Thread Marek Grac
Hi,

we have added support for a Windows host, but it is not trivial to set up
because of various contexts/privileges.

Install openssh on Windows (a tutorial can be found at
http://linuxbsdos.com/2015/07/30/how-to-install-openssh-on-windows-10/)

There is a major issue with the current setup on Windows: you have to start
the virtual machines from an openssh connection if you wish to manage them
from an openssh connection.

So, you have to connect from Windows to the very same Windows host using ssh
and then run

"/Program Files/Oracle/VirtualBox/VBoxManage.exe" start NAME_OF_VM

Be prepared that you will not see that your VM is running in the VirtualBox
management UI.

Afterwards it is enough to add the parameter --host-os windows (or
host_os=windows when stdin/pcs is used).

m,

On Wed, Feb 22, 2017 at 11:49 AM, Marek Grac <mg...@redhat.com> wrote:

> Hi,
>
> I have updated the fence agent for Virtual Box (upstream git). The main
> benefit is the new option --host-os (host_os on stdin) that supports
> linux|macos. So if your host is linux/macos, all you need to set is this
> option (and ssh access to the machine). I would love to add support also
> for windows, but I'm not able to run vboxmanage.exe over openssh. It
> works perfectly from a command prompt under the same user, so there are some
> privilege issues; if you know how to fix this please let me know.
>
> m,
>
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Fence agent for VirtualBox

2017-02-22 Thread Marek Grac
Hi,

I have updated the fence agent for Virtual Box (upstream git). The main
benefit is the new option --host-os (host_os on stdin) that supports
linux|macos. So if your host is linux/macos, all you need to set is this
option (and ssh access to the machine). I would love to add support also for
windows, but I'm not able to run vboxmanage.exe over openssh. It works
perfectly from a command prompt under the same user, so there are some
privilege issues; if you know how to fix this please let me know.

m,
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Fence agent for VirtualBox

2017-02-06 Thread Marek Grac
Hi,

I don't have one. But I have seen a lot of questions about fence_vbox in the
last few days; is there any new material that references it?

m,

On Mon, Feb 6, 2017 at 1:56 PM, Jihed M'selmi 
wrote:

> Hi,
>
> I want to set up a pcmk/corosync cluster using a couple of vbox nodes.
>
> Could anyone share how to install/configure the fence_vbox fence agent?
>
> Cheers
> JM
> --
>
> J.M
>
> ___
> Users mailing list: Users@clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>
>
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Can fence_vbox ssh-options be configured to use Windows DOS shell?

2017-01-27 Thread Marek Grac
Hi,

The command prompt is the start of the line after you log in to a machine via
ssh. On my machine it is:

login@hostname:~$

Because the command prompt depends on the shell, settings, etc., we start ssh
so that it runs a shell with the predefined command prompt [EXPECT]#. That is
what does not work on your system.

m,

On Thu, Jan 26, 2017 at 6:09 PM, <dur...@mgtsciences.com> wrote:

> Marek Grac <mg...@redhat.com> wrote on 01/26/2017 09:19:41 AM:
>
> > From: Marek Grac <mg...@redhat.com>
> > To: Cluster Labs - All topics related to open-source clustering
> > welcomed <users@clusterlabs.org>
> > Date: 01/26/2017 09:20 AM
> > Subject: Re: [ClusterLabs] Can fence_vbox ssh-options be configured
> > to use Windows DOS shell?
> >
> > Hi,
> >
> > On Thu, Jan 26, 2017 at 5:06 PM, <dur...@mgtsciences.com> wrote:
> > I have Windows 10 running Virtualbox with 2 VMs running Fedora 25.
> > I have followed 'Pacemaker 1.1 Clusters from Scratch' 9th edition up
> > through chapter 7.  It works.  I am now trying to fence VMs.  I use
> > Cygwin ssh daemon and of course bash is default for the options.
> >
> > I have used command below from one of the nodes and get the following
> return.
> >
> > fence_vbox -vv --ip=172.23.93.249 --username=durwin --identity-
> > file=/root/.ssh/id_rsa.pub --password= --plug="node1" --action=off
> >
> > add --verbose please.
> >
> > but it looks like you will have to change --ssh-options so it does not
> > execute /bin/bash; it should be enough to set it to "". You will
> > also have to set --command-prompt to an appropriate value then.
> >
> > m,
> >
>
> Thank you.  Verbose is set '-vv'.
>
> I added --ssh-options="" to the command, see below.  I do not know how to
> find out what value command-prompt needs.  What do I look for?
>
> fc25> fence_vbox --verbose --ip=172.23.93.249 --username=durwin
> --identity-file=/root/.ssh/id_rsa.pub --password= --plug="node1"
> --action=off --ssh-options=""
> Delay 0 second(s) before logging in to the fence device
> Running command: /usr/bin/ssh  durwin@172.23.93.249 -i
> /root/.ssh/id_rsa.pub -p 22
> Timeout exceeded.
> 
> command: /usr/bin/ssh
> args: ['/usr/bin/ssh', 'durwin@172.23.93.249', '-i',
> '/root/.ssh/id_rsa.pub', '-p', '22']
> buffer (last 100 chars): "durwin@172.23.93.249's password: "
> before (last 100 chars): "durwin@172.23.93.249's password: "
> after: 
> match: None
> match_index: None
> exitstatus: None
> flag_eof: False
> pid: 17014
> child_fd: 6
> closed: False
> timeout: 30
> delimiter: 
> logfile: None
> logfile_read: None
> logfile_send: None
> maxread: 2000
> ignorecase: False
> searchwindowsize: None
> delaybeforesend: 0.05
> delayafterclose: 0.1
> delayafterterminate: 0.1
> searcher: searcher_re:
> 0: re.compile("Enter passphrase for key '/root/.ssh/id_rsa.pub':")
> 1: re.compile("Are you sure you want to continue connecting (yes/no)?")
> 2: re.compile("\[EXPECT\]#\ ")
> Unable to connect/login to fencing device
>
>
> > Delay 0 second(s) before logging in to the fence device
> > Running command: /usr/bin/ssh  durwin@172.23.93.249 -i /root/.ssh/
> > id_rsa.pub -p 22 -t '/bin/bash -c "PS1=\\[EXPECT\\]#\  /bin/bash --
> > noprofile --norc"'
> > Received: Enter passphrase for key '/root/.ssh/id_rsa.pub':
> > Sent:
> >
> > Received:
> > [EXPECT]#
> > Sent: VBoxManage list runningvms
> >
> > Connection timed out
> >
> > I tried the following commands from a DOS shell on the Windows host
> > and commands successfully executed (from Cygwin terminal it fails).
> >
> > VBoxManage controlvm node1 acpipowerbutton
> > VBoxManage startvm node1 --type=gui
> >
> > I am aware that some Windows executables do not communicate with
> > Cygwin terminals.  Is there a way to pass ssh options so that
> > VBoxManage is executed from DOS shell?
> >
> > Thank you,
> >
> > Durwin
> >
> >
> > This email message and any attachments are for the sole use of the
> > intended recipient(s) and may contain proprietary and/or
> > confidential information which may be privileged or otherwise
> > protected from disclosure. Any unauthorized review, use, disclosure
> > or distribution is prohibited. If you are not the intended recipient
> > (s), please contact the sender by reply email and destroy the
> > original message and any copies of the message as well as any
> > attachments to the original message.
> > ___

Re: [ClusterLabs] Can fence_vbox ssh-options be configured to use Windows DOS shell?

2017-01-26 Thread Marek Grac
Hi,

On Thu, Jan 26, 2017 at 5:06 PM,  wrote:

> I have Windows 10 running Virtualbox with 2 VMs running Fedora 25.  I have
> followed 'Pacemaker 1.1 Clusters from Scratch' 9th edition up through
> chapter 7.  It works.  I am now trying to fence VMs.  I use Cygwin ssh
> daemon and of course bash is default for the options.
>
> I have used command below from one of the nodes and get the following
> return.
>
> fence_vbox -vv --ip=172.23.93.249 --username=durwin
> --identity-file=/root/.ssh/id_rsa.pub --password= --plug="node1"
> --action=off
>

add --verbose please.

but it looks like you will have to change --ssh-options so it does not
execute /bin/bash; it should be enough to set it to "". You will also have
to set --command-prompt to an appropriate value then.
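
A sketch of the resulting invocation, based on the command quoted below; the
--command-prompt value is only a guess and has to match whatever prompt your
Cygwin shell actually prints:

fence_vbox -vv --ip=172.23.93.249 --username=durwin \
  --identity-file=/root/.ssh/id_rsa --password= --plug="node1" --action=off \
  --ssh-options="" --command-prompt="durwin@"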

m,


> Delay 0 second(s) before logging in to the fence device
> Running command: /usr/bin/ssh  durwin@172.23.93.249 -i
> /root/.ssh/id_rsa.pub -p 22 -t '/bin/bash -c "PS1=\\[EXPECT\\]#\  /bin/bash
> --noprofile --norc"'
> Received: Enter passphrase for key '/root/.ssh/id_rsa.pub':
> Sent:
>
> Received:
> [EXPECT]#
> Sent: VBoxManage list runningvms
>
> Connection timed out
>
> I tried the following commands from a DOS shell on the Windows host and
> commands successfully executed (from Cygwin terminal it fails).
>
> VBoxManage controlvm node1 acpipowerbutton
> VBoxManage startvm node1 --type=gui
>
> I am aware that some Windows executables do not communicate with Cygwin
> terminals.  Is there a way to pass ssh options so that VBoxManage is
> executed from DOS shell?
>
> Thank you,
>
> Durwin
>
>
> This email message and any attachments are for the sole use of the
> intended recipient(s) and may contain proprietary and/or confidential
> information which may be privileged or otherwise protected from disclosure.
> Any unauthorized review, use, disclosure or distribution is prohibited. If
> you are not the intended recipient(s), please contact the sender by reply
> email and destroy the original message and any copies of the message as
> well as any attachments to the original message.
> ___
> Users mailing list: Users@clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>
>
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] How to Fence Virtualbox VM with Windows 10 as host.

2017-01-25 Thread Marek Grac
Hi,

On Tue, Jan 24, 2017 at 9:06 PM,  wrote:

> This is my first attempt at clustering, just so you know the level
> required to convey ideas.
>
> I have Windows 10 running Virtualbox with 2 VMs running Fedora 25.  I have
> followed 'Pacemaker 1.1 Clusters from Scratch' 9th edition up through
> chapter 7.  It works.  I am uncertain as to how to fence the VMs with Windows
> 10 as host.  The output from 'pcs stonith describe fence_vbox' is below.
>
> I have Cygwin installed with sshd configured and running.  I can remotely
> ssh into the Windows 10 machine.  I can add the keys from the machines into
> Windows authorized_keys so no user/password is required.  I however do not
> know which of the options are *required*.  Nor do I know what the options
> should be set to.  Some of the options *are* obvious.  If I use *only*
> required ones, ipaddr is obvious, login is obvious, but not sure what port
> is.  Would it be the name of the VM as Virtualbox knows it?
>
> ipaddr (required): IP address or hostname of fencing device
> login (required): Login name
> port (required): Physical plug number on device, UUID or
> identification of machine
>
> Does the host require anything running on it to support the fence?  Do I
> need any other options in addition to the 'required' ones?  How do I test
> it from a node's command line?
>

You can take a look at the manual page of fence_vbox (or run fence_vbox
--help).

In your case it should be enough to set:
* ipaddr
* login
* port (= node to shutdown)
* identity_file (= private key)
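
A sketch of a matching create command plus a manual test from a node's
command line; all values below are placeholders for your setup:

pcs stonith create vbox-fence fence_vbox ipaddr=192.168.56.1 \
  login=windowsuser port=node1 identity_file=/root/.ssh/id_rsa

fence_vbox --ip 192.168.56.1 --username windowsuser \
  --identity-file /root/.ssh/id_rsa --plug node1 --action status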

m,
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] fence-agents 4.0.25 release

2017-01-16 Thread Marek Grac
Welcome to the fence-agents 4.0.25 release

This release includes several bugfixes and features:

* Support for LPAR IVM was added to fence_lpar
* New fence agent for OpenStack's Ironic service
* New fence agent for Azure Resource Manager
* New fence agent for PowerMan
* Major improvement in fence agent for OpenStack Compute
* New option --quiet that disables logging to stderr
* Support for python3

Git repository can be found at https://github.com/ClusterLabs/fence-agents/

The new source tarball can be downloaded here:

https://github.com/ClusterLabs/fence-agents/archive/v4.0.25.tar.gz


To report bugs or issues:

https://bugzilla.redhat.com/

Would you like to meet the cluster team or members of its community?

Join us on IRC (irc.freenode.net #linux-cluster) and share your
experience with other sysadministrators or power users.

Thanks/congratulations to all people that contributed to achieve this
great milestone.

m,
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] fence_apc delay?

2016-09-05 Thread Marek Grac
Hi,

On Mon, Sep 5, 2016 at 3:46 PM, Dan Swartzendruber 
wrote:

> ...
> Marek, thanks.  I have tested repeatedly (8 or so times with disk writes
> in progress) with 5-7 seconds and have had no corruption.  My only issue
> with using power_wait here (possibly I am misunderstanding this) is that
> the default action is 'reboot' which I *think* is 'power off, then power
> on'.  e.g. two operations to the fencing device.  The only place I need a
> delay though, is after the power off operation - doing so after power on is
> just wasted time that the resource is offline before the other node takes
> it over.  Am I misunderstanding this?  Thanks!
>

You are right. The default sequence for reboot is:

get status, power off, delay(power-wait), get status [repeat until OFF],
power on, delay(power-wait), get status [repeat until ON].

The power-wait option was introduced because some devices respond with
strange values when they are queried too soon after a power change. It was
not intended to be used in the way you propose. Possible solutions:

*) Configure the fence device to not use reboot but OFF, ON (see the sketch
below). This is the very same as the situation with multiple power circuits:
you have to switch them all OFF and afterwards turn them ON.

*) Add a new option power-wait-off that would be used only in the OFF case
(and would override power-wait). It should be quite easy to do. Just send us
a PR.
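
For the first option, one way to express it in pacemaker is the
pcmk_reboot_action device attribute; a minimal sketch, assuming a
hypothetical device named apc-fence (the fenced node then stays OFF until
something powers it back ON):

pcs stonith update apc-fence pcmk_reboot_action=off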

m,
___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Unable to Build fence-agents from Source on RHEL6

2016-08-10 Thread Marek Grac
Hi,

* pywsman is required only for fence_amt_ws, so if you don't need that agent,
feel free to remove the dependency (and the agent); see the sketch below
* python2.6 (and RHEL6) is no longer a platform that we support upstream. We
aim for python 2.7 and 3.x currently. But it might work on python2.6 (and
Oyvind accepts patches that fix it)
* for RHEL6 you might want to use our 'RHEL6' branch, which has to work with
python2.6. Maybe you won't get the latest features, but it should be enough
for most installations.
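
A sketch of building only a subset of agents, so the pywsman dependency never
comes into play; this assumes the --with-agents configure option is available
in your checkout, and the agent list is an arbitrary example:

./autogen.sh
./configure --with-agents="apc ipmilan scsi vmware_soap"
make && make install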

m,

On Tue, Aug 9, 2016 at 10:20 PM, Jason A Ramsey  wrote:

> So, I’ve managed to wade through the majority of dependency hell issues
> I’ve encountered trying to get RPMs built of Pacemaker and its ancillary
> packages. That is, of course, with the exception of the fence-agents source
> tree (grabbed from github). Autogen.sh works great, but when it comes to
> configure’ing the tree, it bombs out on a missing pywsman module. Great, so
> I need to install that. pip install pywsman doesn’t work because of missing
> openwsman libraries, etc. so I go ahead and install the packages I’m pretty
> sure I need using yum (openwsman-client, openwsman-server, libwsman). pip
> install still craps out, so I figure out that the build is looking for the
> openwsman headers. Cool, find (it’s not in the default yum repos)
> libwsman-devel, which ends up requiring sblim-sfcc-devel and probably some
> other stuff I can’t remember any more because my brain is mostly jelly at
> this point… Anyway, finally get the libwsman-devel rpm. Yay! Everything
> should work now, right? Wrong. Here’s the output I now get out of pip
> install pywsman:
>
>
>
> < stupiderrormessage >
>
>
>
> # pip install pywsman
>
> DEPRECATION: Python 2.6 is no longer supported by the Python core team,
> please upgrade your Python. A future version of pip will drop support for
> Python 2.6
>
> Collecting pywsman
>
> /usr/lib/python2.6/site-packages/pip/_vendor/requests/
> packages/urllib3/util/ssl_.py:318: SNIMissingWarning: An HTTPS request
> has been made, but the SNI (Subject Name Indication) extension to TLS is
> not available on this platform. This may cause the server to present an
> incorrect TLS certificate, which can cause validation failures. You can
> upgrade to a newer version of Python to solve this. For more information,
> see https://urllib3.readthedocs.org/en/latest/security.html#
> snimissingwarning.
>
>   SNIMissingWarning
>
> /usr/lib/python2.6/site-packages/pip/_vendor/requests/
> packages/urllib3/util/ssl_.py:122: InsecurePlatformWarning: A true
> SSLContext object is not available. This prevents urllib3 from configuring
> SSL appropriately and may cause certain SSL connections to fail. You can
> upgrade to a newer version of Python to solve this. For more information,
> see https://urllib3.readthedocs.org/en/latest/security.html#
> insecureplatformwarning.
>
>   InsecurePlatformWarning
>
>   Using cached pywsman-2.5.2-1.tar.gz
>
> Building wheels for collected packages: pywsman
>
>   Running setup.py bdist_wheel for pywsman ... error
>
>   Complete output from command /usr/bin/python -u -c "import setuptools,
> tokenize;__file__='/tmp/pip-build-bvG1Jf/pywsman/setup.py'
> ;exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n',
> '\n'), __file__, 'exec'))" bdist_wheel -d /tmp/tmp3R7Zz5pip-wheel-
> --python-tag cp26:
>
>   No version.i.in file found -- Building from sdist.
>
>   /usr/lib/python2.6/site-packages/setuptools/dist.py:364: UserWarning:
> Normalizing '2.5.2-1' to '2.5.2.post1'
>
> normalized_version,
>
>   running bdist_wheel
>
>   running build
>
>   running build_ext
>
>   building '_pywsman' extension
>
>   swigging openwsman.i to openwsman_wrap.c
>
>   swig -python -I/tmp/pip-build-bvG1Jf/pywsman -I/usr/include/openwsman
> -features autodoc -o openwsman_wrap.c openwsman.i
>
>   wsman-client.i:44: Warning(504): Function _WsManClient must have a
> return type.
>
>   wsman-client.i:61: Warning(504): Function _WsManClient must have a
> return type.
>
>   creating build
>
>   creating build/temp.linux-x86_64-2.6
>
>   gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall
> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
> --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv
> -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
> -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic
> -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/tmp/pip-build-bvG1Jf/pywsman
> -I/usr/include/openwsman -I/usr/include/python2.6 -c openwsman.c -o
> build/temp.linux-x86_64-2.6/openwsman.o
>
>   gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall
> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
> --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv
> -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
> -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic
> -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/tmp/pip-build-bvG1Jf/pywsman
> 

Re: [ClusterLabs] fence_vmware_soap: fail to shutdown VMs

2016-07-11 Thread Marek Grac
Hi,

90MB of logs is not a big deal; most of it will just be the same request
attempted again and again. Feel free to send me a link to this file.

If you have python-suds then it should be enough; you may try a different
version of that package, but we don't have any additional 3rd-party
dependencies afaik.

m,

On Mon, Jul 4, 2016 at 11:25 AM, Kevin THIERRY <
kevin.thierry.cit...@gmail.com> wrote:

> Thanks a lot for your reply Marek.
>
> Both fence-agents-common and fence-agents-vmware-soap are at version
> 4.0.11-27.
>
> I tried to add --power-timeout but it doesn't matter how long I set the
> power timeout, it always fails after about 4 seconds. If I add -v I end up
> with *a lot* of output (~93MB) which mostly consists of xml. I am thinking
> this is not the kind of output that should be expected. Anyway I tried to
> look for the name of my VM in the logs but it doesn't even appear once.
>
> Here are the first 50 lines of the logs:
>
> ##
>
> # head -n 50 fence-vmware-log.xml
> Delay 0 second(s) before logging in to the fence device
> reading wsdl at: https://10.5.200.20:443/sdk/vimService.wsdl ...
> opening (https://10.5.200.20:443/sdk/vimService.wsdl)
> 
> 
> xmlns="http://schemas.xmlsoap.org/wsdl/;
> <http://schemas.xmlsoap.org/wsdl/>
>xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/;
> <http://schemas.xmlsoap.org/wsdl/soap/>
>xmlns:interface="urn:vim25"
> >
>
>
>   
>  https://localhost/sdk/vimService;
> <https://localhost/sdk/vimService> />
>   
>
> 
>
> sax duration: 1 (ms)
> warning: tns (urn:vim25Service), not mapped to prefix
> importing (vim.wsdl)
> reading wsdl at: https://10.5.200.20:443/sdk/vim.wsdl ...
> opening (https://10.5.200.20:443/sdk/vim.wsdl)
> 
> 
> xmlns="http://schemas.xmlsoap.org/wsdl/;
> <http://schemas.xmlsoap.org/wsdl/>
>xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/;
> <http://schemas.xmlsoap.org/wsdl/mime/>
>xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/;
> <http://schemas.xmlsoap.org/wsdl/soap/>
>xmlns:vim25="urn:vim25"
>xmlns:xsd="http://www.w3.org/2001/XMLSchema;
> <http://www.w3.org/2001/XMLSchema>
> >
>
> targetNamespace="urn:vim25"
>  xmlns="http://www.w3.org/2001/XMLSchema;
> <http://www.w3.org/2001/XMLSchema>
>  xmlns:vim25="urn:vim25"
>  xmlns:xsd="http://www.w3.org/2001/XMLSchema;
> <http://www.w3.org/2001/XMLSchema>
>  xmlns:reflect="urn:reflect"
>  elementFormDefault="qualified"
>   >
>  
>  
>   schemaLocation="reflect-messagetypes.xsd" />
>  
>  
>
> ##
>
> With -v, the error I get at the end of the logs is: "Unable to
> connect/login to fencing device" which is weird since I can get the status
> of a VM without issue...
>
> Could it be something I forgot to install on my machine (a library or
> something else)? I also thought about permission issues, but I am using the
> default root user and I can shut down VMs through vSphere with it.
>
> Ideas about that issue are more than welcome :)
>
> Kevin
>
>
> On 07/04/2016 02:09 PM, Marek Grac wrote:
>
> Hi,
>
> you can try to raise the value of --power-timeout from the default (20
> seconds); you can also add -v to get verbose output.
>
> As long as you have the same version of fence-agents-common and
> fence-agents-vmware, there should be no issues.
>
> m,
>
>
> On Fri, Jul 1, 2016 at 11:31 AM, Kevin THIERRY <
> kevin.thierry.cit...@gmail.com> wrote:
>
>> Hello !
>>
>> I'm trying to fence my nodes using fence_vmware_soap but it fails to
>> shutdown or reboot my VMs. I can get the list of the VMs on a host or query
>> the status of a specific VM without problem:
>>
>> # fence_vmware_soap -a 10.5.200.20 -l root -p "**" -z --ssl-insecure
>> -4 -n laa-billing-backup -o status
>> /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:769:
>> InsecureRequestWarning:
>> Unverified HTTPS request is being made. Adding certificate verification
>> is strongly advised. See:
>> <https://urllib3.readthedocs.org/en/latest/security.html>
>> https://urllib3.readthedocs.org/en/latest/security.html
>>   InsecureRequestWarning)
>> Status: ON
>>
>> However, trying to shutdown or to reboot a VM fails:
>>
>> # fence_vmware_soap -a 10.5.200.20 -l root -

Re: [ClusterLabs] fence_vmware_soap: fail to shutdown VMs

2016-07-04 Thread Marek Grac
Hi,

you can try to raise the value of --power-timeout from the default (20
seconds); you can also add -v to get verbose output.
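
A sketch based on the command from the original mail below, with a higher,
arbitrary timeout and verbose output:

fence_vmware_soap -a 10.5.200.20 -l root -p "**" -z --ssl-insecure -4 \
  -n laa-billing-backup -o reboot --power-timeout 60 -v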

As long as you have the same version of fence-agents-common and
fence-agents-vmware, there should be no issues.

m,


On Fri, Jul 1, 2016 at 11:31 AM, Kevin THIERRY <
kevin.thierry.cit...@gmail.com> wrote:

> Hello !
>
> I'm trying to fence my nodes using fence_vmware_soap but it fails to
> shutdown or reboot my VMs. I can get the list of the VMs on a host or query
> the status of a specific VM without problem:
>
> # fence_vmware_soap -a 10.5.200.20 -l root -p "**" -z --ssl-insecure
> -4 -n laa-billing-backup -o status
> /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:769:
> InsecureRequestWarning:
> Unverified HTTPS request is being made. Adding certificate verification is
> strongly advised. See:
> https://urllib3.readthedocs.org/en/latest/security.html
>   InsecureRequestWarning)
> Status: ON
>
> However, trying to shutdown or to reboot a VM fails:
>
> # fence_vmware_soap -a 10.5.200.20 -l root -p "**" -z --ssl-insecure
> -4 -n laa-billing-backup -o reboot
> /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:769:
> InsecureRequestWarning: Unverified HTTPS request is being made. Adding
> certificate verification is strongly advised. See:
> https://urllib3.readthedocs.org/en/latest/security.html
>   InsecureRequestWarning)
> Failed: Timed out waiting to power OFF
>
> On the ESXi I get the following logs in /var/log/hostd.log:
>
> [LikewiseGetDomainJoinInfo:355] QueryInformation(): ERROR_FILE_NOT_FOUND
> (2/0):
> Accepted password for user root from 10.5.200.12
> 2016-07-01T08:49:50.911Z info hostd[34380B70] [Originator@6876
> sub=Vimsvc.ha-eventmgr opID=47defdf1] Event 190 : User root@10.5.200.12
> logged in as python-requests/2.6.0 CPython/2.7.5
> Linux/3.10.0-327.18.2.el7.x86_64
> 2016-07-01T08:49:50.998Z info hostd[32F80B70] [Originator@6876
> sub=Vimsvc.TaskManager opID=47defdf4 user=root] Task Created :
> haTask--vim.SearchIndex.findByUuid-2513
> 2016-07-01T08:49:50.999Z info hostd[32F80B70] [Originator@6876
> sub=Vimsvc.TaskManager opID=47defdf4 user=root] Task Completed :
> haTask--vim.SearchIndex.findByUuid-2513 Status success
> 2016-07-01T08:49:51.009Z info hostd[32F80B70] [Originator@6876
> sub=Solo.Vmomi opID=47defdf6 user=root] Activation
> [N5Vmomi10ActivationE:0x34603c28] : Invoke done [powerOff] on
> [vim.VirtualMachine:3]
> 2016-07-01T08:49:51.009Z info hostd[32F80B70] [Originator@6876
> sub=Solo.Vmomi opID=47defdf6 user=root] Throw vim.fault.RestrictedVersion
> 2016-07-01T08:49:51.009Z info hostd[32F80B70] [Originator@6876
> sub=Solo.Vmomi opID=47defdf6 user=root] Result:
> --> (vim.fault.RestrictedVersion) {
> -->faultCause = (vmodl.MethodFault) null,
> -->msg = ""
> --> }
> 2016-07-01T08:49:51.027Z info hostd[34380B70] [Originator@6876
> sub=Vimsvc.ha-eventmgr opID=47defdf7 user=root] Event 191 : User
> root@10.5.200.12 logged out (login time: Friday, 01 July, 2016 08:49:50,
> number of API invocations: 0, user agent: python-requests/2.6.0
> CPython/2.7.5 Linux/3.10.0-327.18.2.el7.x86_64)
>
>
> I am wondering if there is some kind of compatibility issue. I am using
> fence-agents-vmware-soap 4.0.11 on CentOS 7.2.1511 and ESXi 6.0.0 Build
> 2494585.
> Any ideas about that issue?
>
> Best regards,
>
> --
> Kevin THIERRY
> IT System Engineer
>
> CIT Lao Ltd. – A.T.M.
> PO Box 10082
> Vientiane Capital – Lao P.D.R.
> Cell : +856 (0)20 2221 8623kevin.thierry.cit...@gmail.com
>
>
> ___
> Users mailing list: Users@clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>
>
___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] fence-agents 4.0.23 release

2016-06-29 Thread Marek Grac
Welcome to the fence-agents 4.0.23 release

This release includes several bugfixes and features:

* A lot of changes in fence_compute (OpenStack compute instance)
* Obtain status of nodes from Cisco UCS correctly
* New fence agent for AMT using openwsman
* Python3 support
* Fence agent for PVE can be used by non-root users
* Parallel building and testing of fence agents
* Fix occasional failures of APC fence agent


Git repository can be found at https://github.com/ClusterLabs/fence-agents/

The new source tarball can be downloaded here:

https://github.com/ClusterLabs/fence-agents/archive/v4.0.23.tar.gz


To report bugs or issues:

https://bugzilla.redhat.com/

Would you like to meet the cluster team or members of its community?

Join us on IRC (irc.freenode.net #linux-cluster) and share your
experience with other sysadministrators or power users.

Thanks/congratulations to all people that contributed to achieve this
great milestone.

m,
___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org