Re: [Linux-HA] Tomcat resource agent - PATCH2 - minor script fixes

2011-01-17 Thread Brett Delle Grazie
Hi Dejan,

On 17 January 2011 14:54, Dejan Muhamedagic  wrote:
> Hi Brett,
>
> Long time.

Indeed it is - thank you for the reminder!

This one simply uses here documents for start/stop operations.
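
For anyone skimming the patch below, the pattern boils down to feeding the
commands to su on stdin via a here-document instead of packing them into one
long quoted -c string. A minimal, self-contained sketch (the user name, paths
and log file are placeholders, not the agent's real parameters):

#!/bin/sh
RUN_AS_USER=tomcat                      # placeholder
CONSOLE_LOG=/tmp/tomcat-console.log     # placeholder

# The whole pipeline is backgrounded, just as the agent does with catalina.sh.
cat <<-END_START | su - -s /bin/sh "$RUN_AS_USER" >> "$CONSOLE_LOG" 2>&1 &
	export JAVA_HOME=/usr/lib/jvm/java
	export CATALINA_HOME=/opt/tomcat
	/opt/tomcat/bin/catalina.sh start
END_START

The <<- form strips leading tab characters from the here-document body, which
is what lets the commands stay indented alongside the surrounding script.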

-- 
Best Regards,

Brett Delle Grazie
From 1c0a2ef05bfbde930962befd99799d4f6a318231 Mon Sep 17 00:00:00 2001
From: Brett Delle Grazie 
Date: Mon, 17 Jan 2011 22:09:44 +
Subject: [PATCH] Low: tomcat: Use here-documents to simplify start/stop operations

---
 heartbeat/tomcat |   30 +++---
 1 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/heartbeat/tomcat b/heartbeat/tomcat
index 689edc7..671ba82 100755
--- a/heartbeat/tomcat
+++ b/heartbeat/tomcat
@@ -146,14 +146,14 @@ start_tomcat()
 		"$CATALINA_HOME/bin/catalina.sh" start $TOMCAT_START_OPTS \
 			>> "$TOMCAT_CONSOLE" 2>&1 &
 	else
-		su - -s /bin/sh "$RESOURCE_TOMCAT_USER" \
-			-c "export JAVA_HOME=${OCF_RESKEY_java_home};\
-export JAVA_OPTS=-Dname=${TOMCAT_NAME};\
-export CATALINA_HOME=${OCF_RESKEY_catalina_home};\
-export CATALINA_PID=${OCF_RESKEY_catalina_pid};\
-export CATALINA_OPTS=\"${OCF_RESKEY_catalina_opts}\";\
-$CATALINA_HOME/bin/catalina.sh start ${OCF_RESKEY_tomcat_start_opts}" \
-			>> "$TOMCAT_CONSOLE" 2>&1 &
+		cat<<-END_TOMCAT_START | su - -s /bin/sh "$RESOURCE_TOMCAT_USER" >> "$TOMCAT_CONSOLE" 2>&1 &
+			export JAVA_HOME=${OCF_RESKEY_java_home}
+			export JAVA_OPTS=-Dname=${TOMCAT_NAME}
+			export CATALINA_HOME=${OCF_RESKEY_catalina_home}
+			export CATALINA_PID=${OCF_RESKEY_catalina_pid}
+			export CATALINA_OPTS=\"${OCF_RESKEY_catalina_opts}\"
+			$CATALINA_HOME/bin/catalina.sh start ${OCF_RESKEY_tomcat_start_opts}
+END_TOMCAT_START
 	fi
 
 	while true; do
@@ -181,13 +181,13 @@ stop_tomcat()
 			>> "$TOMCAT_CONSOLE" 2>&1 &
 		eval $tomcat_stop_cmd >> "$TOMCAT_CONSOLE" 2>&1
 	else
-		su - -s /bin/sh "$RESOURCE_TOMCAT_USER" \
-			-c "export JAVA_HOME=${OCF_RESKEY_java_home};\
-export JAVA_OPTS=-Dname=${TOMCAT_NAME};\
-export CATALINA_HOME=${OCF_RESKEY_catalina_home};\
-export CATALINA_PID=${OCF_RESKEY_catalina_pid};\
-$CATALINA_HOME/bin/catalina.sh stop" \
-			>> "$TOMCAT_CONSOLE" 2>&1 &
+		cat<<-END_TOMCAT_STOP | su - -s /bin/sh "$RESOURCE_TOMCAT_USER" >> "$TOMCAT_CONSOLE" 2>&1 &
+			export JAVA_HOME=${OCF_RESKEY_java_home}
+			export JAVA_OPTS=-Dname=${TOMCAT_NAME}
+			export CATALINA_HOME=${OCF_RESKEY_catalina_home}
+			export CATALINA_PID=${OCF_RESKEY_catalina_pid}
+			$CATALINA_HOME/bin/catalina.sh stop
+END_TOMCAT_STOP
 	fi
 
 	lapse_sec=0
-- 
1.7.1

___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems

Re: [Linux-HA] Tomcat resource agent - PATCH2 - minor script fixes

2011-01-17 Thread Dejan Muhamedagic
Hi Brett,

Long time.

On Thu, Jul 15, 2010 at 06:57:13PM +0100, Brett Delle Grazie wrote:
> 
> Hi,
> 
> -Original Message-
> From: Dejan Muhamedagic [mailto:deja...@fastmail.fm]
> Sent: Thu 15/07/2010 15:47
> To: General Linux-HA mailing list
> Subject: Re: [Linux-HA] Tomcat resource agent - PATCH2 - minor script fixes
>  
> Hi,
> 
> On Mon, Jul 12, 2010 at 01:03:05PM +0100, Brett Delle Grazie wrote:
> > Hi,
> > 
> > Another patch for the Tomcat resource agent.
> > 
> > This patch simply:
> > 
> > 1. Removes the 'n' character added after the '\' on the export
> > commands - otherwise this causes "'n' not found" messages to
> > occur in the resource agent log during start and stop
> > operations.
> 
> It'd be cleaner to feed everything to the su command on stdin:
> 
> cat<<EOF | su - -s /bin/sh "$RESOURCE_TOMCAT_USER" >> "$TOMCAT_CONSOLE" 2>&1 &
> export JAVA_HOME=${OCF_RESKEY_java_home}
> ...
> $CATALINA_HOME/bin/catalina.sh start ${OCF_RESKEY_tomcat_start_opts}
> EOF
> 
> If you feel like testing this too ...
> 
> BDG: What a good suggestion. Will test and resubmit.
> 
> > 2. Adds a missing background operator (&) to the stop
> > operation. Otherwise the stop operation cannot be monitored by
> > the resource agent
> 
> This is a different issue. I'll split it off.

Any news? Could you submit new versions if you have them
available?

Cheers,

Dejan

> BDG: Fine, no problem - it's a trivial fix.
> 
> Thanks,
> 
> Dejan
> 
> > This patch can be applied independently of the documentation
> > patch supplied previously.
> > 
> > I hope this helps
> >
> 
> Thanks,
>  
> Best Regards,
>  
> Brett
> 

___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems


Re: [Linux-HA] Fencing : pb about 'dynamic-list'

2011-01-17 Thread Alain.Moulle
Hi Dejan,

Yes, stonith -t external/ipmi ... -S works fine:
/usr/sbin/stonith -t external/ipmi hostname=node2 ipaddr=' ' 
userid='mylogin' passwd='mypass' interface='lan'  -S
stonith: external/ipmi device OK.

I ran the command just after an attempt to fence:
1295276053 2011 Jan 17 15:54:13 node3 daemon info stonith-ng [4335]: 
info: can_fence_host_with_device: restofencenode2 can not fence node2: 
dynamic-list

I'm running these releases:
pacemaker-1.1.2-7
cluster-glue-1.0.6-1.6

Alain

> Looks ok to me. Did you try this on the command line:
>
> # stonith -t external/ipmi ... -S
>
> If that works, perhaps you found a bug. Do you run the latest
> version?
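
In case it helps to narrow this down: giving the fencing resource a static
host list makes stonith-ng skip the dynamic-list query entirely, which is one
way to tell whether the plugin's host discovery is what is failing. A crm
shell sketch (the hostname/userid/interface values and the primitive name are
taken from the command and log above, ipaddr stays elided; the pcmk_host_*
settings are an addition for illustration, not part of the reported
configuration):

primitive restofencenode2 stonith:external/ipmi \
        params hostname="node2" ipaddr="..." userid="mylogin" passwd="mypass" \
               interface="lan" pcmk_host_list="node2" pcmk_host_check="static-list"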


___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems


Re: [Linux-HA] Option 3 : corosync + cpg + cman + mcp

2011-01-17 Thread Andrew Beekhof
Did you make sure to use different values for "nodename:" on both nodes?
It's an easy cut&paste error to make.

Otherwise it looks pretty sane.  What do the logs say?
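
For illustration, on chili3 the cman block would need to carry that node's own
name; everything except nodename below is copied from the configuration quoted
further down:

cman {
        expected_votes: 2
        cluster_id: 1
        nodename: chili3
        two-node: 1
        max_queued: 10
}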

On Thu, Dec 16, 2010 at 1:46 PM, Alain.Moulle  wrote:
> Hi,
>
> I'm trying to get Option 3 working, but it does not start.
>
> I have two nodes; the network indicated in ringnumber 0 of
> corosync.conf is working fine.
> Moreover, these two nodes were working fine with option 1 (corosync and
> pacemaker),
> so I just changed corosync.conf by adding the records
> cluster/clusternodes&cman,
> service/corosync_cman and quorum/quorum_cman, then I executed on both nodes:
> service corosync start
>    => ok on both nodes
> service pacemaker start
>    => ok on both nodes
> and 60s later, crm_mon displays on node chili2 :
> 
> Last updated: Thu Dec 16 13:39:37 2010
> Stack: cman
> Current DC: NONE
> 1 Nodes configured, unknown expected votes
> 0 Resources configured.
> 
>
> Online: [ chili2 ]
>
> and on node chili3 :
>
> 
> Last updated: Thu Dec 16 13:40:10 2010
> Current DC: NONE
> 0 Nodes configured, unknown expected votes
> 0 Resources configured.
> 
>
>
> so it seems that chili2 does not "enter" the cluster, but I can't
> find the reason ...
> Below is some information about the configuration.
> I think I'm missing an option somewhere, but where ...?
>
> Thanks if you have any idea.
> Alain
>
> cib.xml on chili2:
> cat /var/lib/heartbeat/crm/cib.xml
> <cib ... validate-with="pacemaker-1.2" cib-last-written="Thu Dec 16 12:03:44
> 2010" crm_feature_set="3.0.2">
>   <configuration>
>     <crm_config>
>       <cluster_property_set ...>
>         <nvpair ... name="dc-version"
> value="1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe"/>
>         <nvpair ... name="cluster-infrastructure" value="cman"/>
>       </cluster_property_set>
>     </crm_config>
>     <nodes>
>       <node ... uname="chili2"/>
>     </nodes>
>     <resources/>
>     <constraints/>
>   </configuration>
>   <status/>
> </cib>
>
> cib.xml on chili3:
> <cib ... validate-with="pacemaker-1.2" cib-last-written="Thu Dec 16 12:02:22 2010">
>   <configuration>
>     <crm_config/>
>     <nodes/>
>     <resources/>
>     <constraints/>
>   </configuration>
>   <status/>
> </cib>
>
> _corosync.conf add-ons versus the corosync.conf used with option 1 :_
> cluster {
>        name : HA
>
>        clusternodes {
>                clusternode {
>                        votes: 1
>                        nodeid: 1
>                        name: chili2
>                }
>                clusternode {
>                        votes: 1
>                        nodeid: 2
>                        name: chili3
>                }
>        }
>        cman {
>                expected_votes: 2
>                cluster_id: 1
>                nodename: chili2
>                two-node: 1
>                max_queued: 10
>        }
> }
> service {
>        name: corosync_cman
>        ver:  0
> }
> quorum {
>        provider: quorum_cman
> }
>
> _and other records remain the same:_
> aisexec {
>        user:   root
>        group:  root
> }
> totem {
>
>        version: 2
>        token:          5000
>        token_retransmits_before_loss_const: 20
>        join:           1000
>        consensus:      7500
>        vsftype:        none
>        max_messages:   20
>        secauth:        off
>        threads:        0
>        clear_node_high_bit: yes
>        rrp_mode :       active
>        interface {
>                ringnumber: 0
>                bindnetaddr: 16.2.0.0
>                mcastaddr: 226.1.1.1
>                mcastport: 5405
>        }
>  }
>  logging {
>        fileline: off
>        to_syslog: yes
>        to_stderr: no
>        syslog_facility: daemon
>        debug: on
>        timestamp: on
>  }
>
>  amf {
>        mode: disabled
>  }
>
>
> ___
> Linux-HA mailing list
> Linux-HA@lists.linux-ha.org
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems
>
___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems


Re: [Linux-HA] Question about limits around resources

2011-01-17 Thread Andrew Beekhof
On Mon, Dec 13, 2010 at 10:46 AM, Alain.Moulle  wrote:
> Hi Andrew,
>
> Currently, my nodes are being reinstalled with RHEL6 GA, so as soon as
> possible
> I'll run the same tests, but with the GA releases, i.e.:
> pacemaker-1.1.2-7.el6
> corosync-1.2.3-21.el6.x86_64
> and by the way, I'll also test option 3 with corosync + cpg + cman + mcp.
>
> If I still have these two main problems with the GA releases, I'll
> ask you which up-to-date stable release I could take from cluster-labs
> and rebuild on el6 so that we know whether the problems remain with the
> latest stable releases ...

I keep fedora up-to-date.
Usually just rebuilding the latest SRPM from their latest distro is a
good bet (fedora builds also default to supporting cman)
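
Roughly, for anyone who has not rebuilt an SRPM before (a sketch; the package
file name is a placeholder for whatever source RPM you download from the
Fedora repositories):

# install the build tooling, pull in the build dependencies from the spec,
# then rebuild
yum install rpm-build yum-utils
yum-builddep pacemaker-*.src.rpm
rpmbuild --rebuild pacemaker-*.src.rpm
# the rebuilt binary packages end up under the RPMS/ directory of your
# rpmbuild tree
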
___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems


Re: [Linux-HA] Issues when running Heartbeat on FreeBSD 8.1 RELEASE

2011-01-17 Thread Andrew Beekhof
On Fri, Dec 10, 2010 at 4:26 PM, Kevin Mai  wrote:
> Hi folks,
>
> I'm trying to build a failover solution using FreeBSD 8.1-RELEASE and 
> Heartbeat from ports (v2.1.4-10).
>
> I've already configured heartbeat on the two peers, but once I start the
> daemon using the /usr/local/etc/rc.d/heartbeat script, both CRM and CIB
> fail to start.
>
> I've already found out that the issue is with CIB: when the daemon
> runs CIB it doesn't start, but if I run it with some extra flags, it starts,
> and then I'm able to run CRM too.

Do uid 275 and gid 275 exist?
Possibly you have some permission issues that go away when you run the
daemons manually (since you're now running them as root).
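
A quick way to check both possibilities from a root shell (the heartbeat
state directories below are the ones that appear in the logs; adjust the
paths if the port installs elsewhere):

# is anything actually assigned uid/gid 275?
getent passwd 275 || echo "no user with uid 275"
getent group 275  || echo "no group with gid 275"
# can that account reach heartbeat's state and socket directories?
ls -ld /var/lib/heartbeat/crm /var/run/heartbeat/crm
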

>
> IE:
> heartbeat[12539]: 2010/12/10_14:22:14 info: Starting 
> "/usr/local/lib/heartbeat/cib" as uid 275 gid 275 (pid 12539)
> heartbeat[12540]: 2010/12/10_14:22:14 info: Starting 
> "/usr/local/lib/heartbeat/attrd" as uid 275 gid 275 (pid 12540)
> heartbeat[12482]: 2010/12/10_14:22:14 WARN: Managed 
> /usr/local/lib/heartbeat/cib process 12539 exited with return code 2.
> heartbeat[12482]: 2010/12/10_14:22:14 ERROR: Client 
> /usr/local/lib/heartbeat/cib "respawning too fast"
> heartbeat[12541]: 2010/12/10_14:22:14 info: Starting 
> "/usr/local/lib/heartbeat/crmd" as uid 275 gid 275 (pid 12541)
> heartbeat[12482]: 2010/12/10_14:22:14 WARN: Managed 
> /usr/local/lib/heartbeat/attrd process 12540 exited with return code 2.
> heartbeat[12482]: 2010/12/10_14:22:14 ERROR: Client 
> /usr/local/lib/heartbeat/attrd "respawning too fast"
> heartbeat[12482]: 2010/12/10_14:22:14 WARN: Managed 
> /usr/local/lib/heartbeat/crmd process 12541 exited with return code 2.
> heartbeat[12482]: 2010/12/10_14:22:14 ERROR: Client 
> /usr/local/lib/heartbeat/crmd "respawning too fast"
>
> but if I run it from command line
>
> [root@mrefns09 /usr/ports]# /usr/local/lib/heartbeat/cib -s -VVV &
> cib[13338]: 2010/12/10_14:30:49 info: main: Retrieval of a per-action CIB: 
> disabled
> cib[13338]: 2010/12/10_14:30:49 info: retrieveCib: Reading cluster 
> configuration from: /var/lib/heartbeat/crm/cib.xml (digest: 
> /var/lib/heartbeat/crm/cib.xml.sig)
> cib[13338]: 2010/12/10_14:30:49 debug: debug3: file2xml: Reading 3538 bytes 
> from file
> cib[13338]: 2010/12/10_14:30:49 WARN: validate_cib_digest: No on-disk digest 
> present
> cib[13338]: 2010/12/10_14:30:49 debug: update_quorum: CCM quorum: old=(null), 
> new=false
> cib[13338]: 2010/12/10_14:30:49 debug: update_counters: Counters updated by 
> readCibXmlFile
> cib[13338]: 2010/12/10_14:30:49 notice: readCibXmlFile: Enabling DTD 
> validation on the existing (sane) configuration
> cib[13338]: 2010/12/10_14:30:49 info: startCib: CIB Initialization completed 
> successfully
> cib[13338]: 2010/12/10_14:30:49 debug: debug3: init_server_ipc_comms: 
> Listening on: /var/run/heartbeat/crm/cib_callback
> cib[13338]: 2010/12/10_14:30:49 debug: debug3: init_server_ipc_comms: 
> Listening on: /var/run/heartbeat/crm/cib_ro
> cib[13338]: 2010/12/10_14:30:49 debug: debug3: init_server_ipc_comms: 
> Listening on: /var/run/heartbeat/crm/cib_rw
> cib[13338]: 2010/12/10_14:30:49 debug: debug3: init_server_ipc_comms: 
> Listening on: /var/run/heartbeat/crm/cib_rw_syncronous
> cib[13338]: 2010/12/10_14:30:49 debug: debug3: init_server_ipc_comms: 
> Listening on: /var/run/heartbeat/crm/cib_ro_syncronous
> cib[13338]: 2010/12/10_14:30:49 info: cib_init: Starting cib mainloop
>
> [root@mrefns09 /usr/local/lib/heartbeat]# /usr/local/lib/heartbeat/crmd -VVV
> crmd[14877]: 2010/12/10_15:14:28 debug: debug3: main: Enabling coredumps
> crmd[14877]: 2010/12/10_15:14:28 debug: debug2: init_dotfile: actions:trace: 
> digraph "g" {
> crmd[14877]: 2010/12/10_15:14:28 debug: debug2: init_dotfile: actions:trace: 
> size = "30,30"
> crmd[14877]: 2010/12/10_15:14:28 debug: debug2: init_dotfile: actions:trace: 
> graph [
> crmd[14877]: 2010/12/10_15:14:28 debug: debug2: init_dotfile: actions:trace: 
> fontsize = "12"
> crmd[14877]: 2010/12/10_15:14:28 debug: debug2: init_dotfile: actions:trace: 
> fontname = "Times-Roman"
> crmd[14877]: 2010/12/10_15:14:28 debug: debug2: init_dotfile: actions:trace: 
> fontcolor = "black"
> crmd[14877]: 2010/12/10_15:14:28 debug: debug2: init_dotfile: actions:trace: 
> bb = "0,0,398.922306,478.927856"
> crmd[14877]: 2010/12/10_15:14:28 debug: debug2: init_dotfile: actions:trace: 
> color = "black"
> crmd[14877]: 2010/12/10_15:14:28 debug: debug2: init_dotfile: actions:trace: ]
> crmd[14877]: 2010/12/10_15:14:28 debug: debug2: init_dotfile: actions:trace: 
> node [
> crmd[14877]: 2010/12/10_15:14:28 debug: debug2: init_dotfile: actions:trace: 
> fontsize = "12"
> crmd[14877]: 2010/12/10_15:14:28 debug: debug2: init_dotfile: actions:trace: 
> fontname = "Times-Roman"
> crmd[14877]: 2010/12/10_15:14:28 debug: debug2: init_dotfile: actions:trace: 
> fontcolor = "black"
> crmd[14877]: 2010/12/10_15:14:28 debug: debug2: init_dotfile: actions:trace: 
> shape = "ellipse"
> crmd[14877]: 2010/12/10_15

[Linux-HA] Are the Resource Agents POSIX compliant?

2011-01-17 Thread Michele Codutti
Hello, I'm in the process of upgrading from Debian lenny to squeeze (so
from heartbeat 2.1.3 to pacemaker 1.0.9), but with this release the
default shell (for scripts only) has changed from bash to dash.
The difference between bash and dash is that the latter is strictly POSIX
compliant and doesn't support bashisms:
https://wiki.ubuntu.com/DashAsBinSh
So my question is: are the resource agents (now cluster agents) POSIX
compliant?
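
As a concrete illustration of the kind of construct that matters (made up for
this mail, not taken from any actual agent):

#!/bin/sh
s="hello world"

# Bashisms such as [[ ]] and ${var//pattern/replacement} fail under dash
# with "[[: not found" and "Bad substitution":
#   if [[ $s == hello* ]]; then echo "${s//world/dash}"; fi

# POSIX constructs that dash accepts:
case $s in
    hello*) echo "$s" | sed 's/world/dash/' ;;
esac

Debian's devscripts package ships a checkbashisms script that flags most of
these automatically, which may be quicker than reading every agent by hand.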

-- 
Michele Codutti
Centro Servizi Informatici e Telematici (CSIT)
Universita' degli Studi di Udine
via Delle Scienze, 208 - 33100 UDINE
tel +39 0432 558928
fax +39 0432 558911
e-mail: michele.codutti at uniud.it

___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems


Re: [Linux-HA] Pacemaker & AWS elastic IPs

2011-01-17 Thread Andrew Beekhof
On Wed, Dec 15, 2010 at 9:11 PM, Andrew Miklas  wrote:
> Hi,
>
> On 26-Nov-10, at 1:41 AM, Andrew Beekhof wrote:
>
>>> The problem here is that these spurious node failures cause Pacemaker
>>> to initiate unnecessary resource migrations.  Is it normal for the
>>> cluster to become confused for a while when the network connection to
>>> a node is suddenly restored?
>>
>> Its normal for the CCM (part of heartbeat) and used to be normal for
>> corosync.
>> These days I think corosync does a better job in these scenarios.
>
> Does Pacemaker have an option to hold off on running the resource
> reassignments until the node membership as reported by Heartbeat /
> Corosync has stabilized?

Define stable?
Basically one should configure this in corosync/heartbeat - at the
point they report a node as gone, it really should be gone.

If they're reporting other nodes disappearing, then that's a bug in
their software.
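
Concretely, the knobs that control how quickly corosync declares a node gone
live in the totem section of corosync.conf; the values below are simply the
ones from the configuration quoted earlier in this digest, not recommendations:

totem {
        token: 5000                             # ms without the token before a node is suspected
        token_retransmits_before_loss_const: 20
        consensus: 7500                         # ms to reach consensus before a new membership is formed
}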

> Having a rejoining node cause the cluster to
> incorrectly drop good nodes seems like it's a big problem for
> installations that don't use STONITH to keep a once-dead node down.

There is a reason Red Hat and SUSE don't support installations that
don't use STONITH ;-)

>
>
>>>  Or is this happening because using
>>> iptables is not a fair test of how the system will respond during a
>>> network split?
>>
>> Unless you've got _really_ quick hands, you're also creating an
>> asymmetric network split.
>> ie. A can see B but B can't see A.
>>
>> This would be causing additional confusion at the messaging level.
>
> I retested using ifdown / ifup to be sure I was getting a consistent
> split of the network.  Unfortunately, it still looks like it's a
> problem.
>
> Just in case it's helpful,

Not really, I'm afraid.
It's a well-known bug in the CCM - the problem is that no one knows how it works.

Honestly, try corosync instead.

> here's a snippet from the logs showing node
> "test2" returning after a network outage.  You'll see that the node
> "test1" is incorrectly marked as failed for a while, even though the
> nodes test1, test3, and the test1 <-> test3 network connection was up
> at all times.  These logs are from an Ubuntu 10.04 system running the
> default versions of Pacemaker (1.0.8+hg15494-2ubuntu2) and Heartbeat
> (3.0.3-1ubuntu1).
>
> Dec 15 02:42:16 test3 heartbeat: [1109]: CRIT: Cluster node test2
> returning after partition.
> Dec 15 02:42:16 test3 heartbeat: [1109]: info: For information on
> cluster partitions, See URL: http://linux-ha.org/wiki/Split_Brain
> Dec 15 02:42:16 test3 heartbeat: [1109]: WARN: Deadtime value may be
> too small.
> Dec 15 02:42:16 test3 heartbeat: [1109]: info: See FAQ for information
> on tuning deadtime.
> Dec 15 02:42:16 test3 heartbeat: [1109]: info: URL: 
> http://linux-ha.org/wiki/FAQ#Heavy_Load
> Dec 15 02:42:16 test3 heartbeat: [1109]: info: Link test2:eth0 up.
> Dec 15 02:42:16 test3 heartbeat: [1109]: WARN: Late heartbeat: Node
> test2: interval 125220 ms
> Dec 15 02:42:16 test3 heartbeat: [1109]: info: Status update for node
> test2: status active
> Dec 15 02:42:16 test3 cib: [1124]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (6ba) from test2: not in our membership
> Dec 15 02:42:16 test3 crmd: [1128]: notice: crmd_ha_status_callback:
> Status update: Node test2 now has status [active] (DC=true)
> Dec 15 02:42:16 test3 crmd: [1128]: info: crm_update_peer_proc:
> test2.ais is now online
> Dec 15 02:42:17 test3 ccm: [1123]: debug: quorum plugin: majority
> Dec 15 02:42:17 test3 ccm: [1123]: debug: cluster:linux-ha,
> member_count=1, member_quorum_votes=100
> Dec 15 02:42:17 test3 ccm: [1123]: debug: total_node_count=3,
> total_quorum_votes=300
> Dec 15 02:42:17 test3 cib: [1124]: WARN: cib_peer_callback: Discarding
> cib_apply_diff message (6bc) from test2: not in our membership
> Dec 15 02:42:17 test3 cib: [1124]: info: mem_handle_event: Got an
> event OC_EV_MS_INVALID from ccm
> Dec 15 02:42:17 test3 cib: [1124]: info: mem_handle_event: no
> mbr_track info
> Dec 15 02:42:17 test3 cib: [1124]: info: mem_handle_event: Got an
> event OC_EV_MS_INVALID from ccm
> Dec 15 02:42:17 test3 cib: [1124]: info: mem_handle_event:
> instance=33, nodes=1, new=0, lost=1, n_idx=0, new_idx=1, old_idx=4
> Dec 15 02:42:17 test3 cib: [1124]: info: cib_ccm_msg_callback:
> Processing CCM event=INVALID (id=33)
> Dec 15 02:42:17 test3 cib: [1124]: info: crm_update_peer: Node test1:
> id=0 state=lost (new) addr=(null) votes=-1 born=30 seen=32
> proc=0302
> Dec 15 02:42:17 test3 crmd: [1128]: info: mem_handle_event: Got an
> event OC_EV_MS_INVALID from ccm
> Dec 15 02:42:17 test3 crmd: [1128]: info: mem_handle_event: no
> mbr_track info
> Dec 15 02:42:17 test3 crmd: [1128]: info: mem_handle_event: Got an
> event OC_EV_MS_INVALID from ccm
> Dec 15 02:42:17 test3 crmd: [1128]: info: mem_handle_event:
> instance=33, nodes=1, new=0, lost=1, n_idx=0, new_idx=1, old_idx=4
> Dec 15 02:42:17 test3 crmd: [1128]: info: crmd_ccm_msg_callback:
> Quorum lost after e