I have got the SNMP subagent from pacemaker-mgmt 2.1.2 working with corosync 2.3 and pacemaker 1.1.10.
Some modifications were needed because of a wrong attach method to the CIB and one nasty bug where hbagent crashes when it does not find an operation while parsing a change.
As for all versions of
The resource agent was developed by Stefan Wenk and me.
The plan is to include it in the resource-agents Git repo via a pull request after a short testing period outside our own labs.
Rainer
Sent: Thursday, 30 January 2014 at 00:25
From: Vladimir Broz vladik...@centrum.cz
To: The Pacemaker
On Thursday, 30 January 2014 at 10:16:35, Rainer Brestan wrote:
I have got the SNMP subagent from pacemaker-mgmt 2.1.2 working with corosync
2.3 and pacemaker 1.1.10. Some modifications were needed because of a
wrong attach method to the CIB and one nasty bug where hbagent crashes when
it does
On 2014-01-29T21:42:52, Yogesh Patil yogeshpatil...@gmail.com wrote:
I am creating a clone of a group. Although clone-max is 2, it creates
resources on all nodes in the cluster and then converts all but 2 to an orphaned
stopped state (crm_mon output). I want to avoid that state, but all I get is
stopped
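A minimal sketch of the kind of configuration being described, in crm shell syntax; the group and resource names here are illustrative assumptions, not taken from the post:

    primitive p-dummy ocf:heartbeat:Dummy
    group g-mygroup p-dummy
    clone cl-mygroup g-mygroup \
        meta clone-max=2 clone-node-max=1 interleave=true

With clone-max=2 the cluster should run at most two copies of g-mygroup; the orphaned entries in crm_mon usually refer to leftover status for instances that no longer exist in the configuration.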
Hello,
I am coming up short in my searches, but I don't know exactly what I am
searching for; I'm hoping someone could point me in the right direction.
I have Pacemaker set up in active/passive on my email server. The systems are
kept in sync using DRBD.
When there is a failure on node-1, everything
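A minimal sketch of a typical active/passive DRBD pairing in crm shell syntax; the resource names and the DRBD resource name "mail" are assumptions for illustration, not details from the post:

    primitive p-drbd-mail ocf:linbit:drbd \
        params drbd_resource="mail" \
        op monitor interval="15s" role="Master" \
        op monitor interval="30s" role="Slave"
    ms ms-drbd-mail p-drbd-mail \
        meta master-max=1 master-node-max=1 clone-max=2 notify=true

The master/slave (ms) resource is what lets Pacemaker promote DRBD on the active node and demote it on the passive one during failover.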
Hi!
I recently changed hosting platform versions for my PCMK clusters, from a
RHEL 6.0 equivalent to SL 6.4. I also changed from pcmk 1.1.2 + corosync
1.3.3 to pcmk 1.1.10 + corosync 1.4.1, which come with SL6.
Until now, I used to manage my pcmk+corosync layers directly from my own
personal init
I solved my problem by making a hacktacular LSB script called proxyres.
When I run service proxyres start, it SSHes into the proxy and runs the
restart commands there.
service proxyres stop simply exits with 0,
and service proxyres status netcats a port, returning 0 for success and 3 for failure.
It
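A minimal sketch of such a wrapper script; the proxy hostname, the remote restart command, and the probed port are hypothetical placeholders, not details from the original post:

    #!/bin/sh
    # proxyres: LSB-style wrapper that controls a service on a remote proxy.
    # PROXY_HOST, the restart command, and port 3128 are illustrative assumptions.
    PROXY_HOST=proxy.example.com

    case "$1" in
      start)
        # run the real restart on the remote proxy over SSH
        ssh "$PROXY_HOST" 'service squid restart'
        ;;
      stop)
        # nothing to stop locally; report success
        exit 0
        ;;
      status)
        # LSB status convention: 0 = running, 3 = not running
        nc -z "$PROXY_HOST" 3128 && exit 0
        exit 3
        ;;
      *)
        echo "Usage: $0 {start|stop|status}" >&2
        exit 2
        ;;
    esac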
Hi everyone,
I am running a two-node cluster which hosts two Xen VMs. We're using
DRBD, but it's managed directly from Xen.
The configuration of one of these resources is as follows:
primitive xen-vm1 ocf:heartbeat:Xen \
    params xmfile="/etc/xen/vm1.cfg" \
    op monitor interval="30s"
Hi Andrew,
Sorry this is late.
I registered this problem in Bugzilla.
The report file is attached as well.
* http://bugs.clusterlabs.org/show_bug.cgi?id=5194
Best Regards,
Hideo Yamauchi.
--- On Tue, 2014/1/14, Andrew Beekhof and...@beekhof.net wrote:
On 14 Jan 2014, at 4:33 pm,
Hi all,
I measured the performance of Pacemaker with the following combination:
Pacemaker-1.1.11.rc1
libqb-0.16.0
corosync-2.3.2
All nodes are KVM virtual machines.
After starting 14 nodes, I forcibly stopped the vm01 node.
virsh destroy vm01 was used for the stop.
Then, in