Hi, I'm using a Pacemaker+Corosync bundle to run a Pound-based load balancer. After an update to CentOS 6.3 there is a mismatch in the node status: via crm_mon, everything looks fine on one node, while on the other node everything is shown as offline. Everything was fine on CentOS 6.2.
Node powerpound:

============
Last updated: Fri Jul 20 12:04:29 2012
Last change: Thu Jul 19 17:58:31 2012 via crm_attribute on pilotpound
Stack: openais
Current DC: powerpound - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
7 Resources configured.
============

Online: [ powerpound pilotpound ]

HA_IP_1 (ocf::heartbeat:IPaddr2): Started powerpound
HA_IP_2 (ocf::heartbeat:IPaddr2): Started powerpound
HA_IP_3 (ocf::heartbeat:IPaddr2): Started powerpound
HA_IP_4 (ocf::heartbeat:IPaddr2): Started powerpound
HA_IP_5 (ocf::heartbeat:IPaddr2): Started powerpound
Clone Set: pingclone [ping-gateway]
    Started: [ pilotpound powerpound ]

Node pilotpound:

============
Last updated: Fri Jul 20 12:04:32 2012
Last change: Thu Jul 19 17:58:17 2012 via crm_attribute on pilotpound
Stack: openais
Current DC: NONE
2 Nodes configured, 2 expected votes
7 Resources configured.
============

OFFLINE: [ powerpound pilotpound ]

From /var/log/messages on pilotpound:

Jul 20 12:06:12 pilotpound cib[24755]: warning: cib_peer_callback: Discarding cib_apply_diff message (35909) from powerpound: not in our membership
Jul 20 12:06:12 pilotpound cib[24755]: warning: cib_peer_callback: Discarding cib_apply_diff message (35910) from powerpound: not in our membership

How could this have happened, and what can I do to solve the problem? Any suggestions are welcome.

Kind regards,

fatcharly

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org