Sorry about the garbled email. Trying again with plain text.


A working cluster configuration running under Pacemaker 1.0.12 on RHEL 5 has been 
migrated, with minimal modifications, to Pacemaker 1.1.10 on Ubuntu 14.04. The 
exact version string is "1.1.10+git20130802-1ubuntu2.3".

We run simple active/standby two-node clusters. 

There are four resources on each node:
- a stateful resource (Encryptor) representing a process in either active or 
  standby mode; this process does not maintain persistent data.
- a clone resource (CredProxy) representing a helper process.
- two clone resources (Ingress, Egress) representing network interfaces.

Colocation constraints require all three clone resources to be in the Started 
role on a node before the Encryptor resource can run there in the Master role.

The full configuration is at the end of this message.

The Encryptor resource should fail over on these events:
- active node (i.e. node containing active Encryptor process) goes down
- active Encryptor process goes down and cannot be restarted
- auxiliary CredProxy process on active node goes down and cannot be restarted
- either interface on active node goes down

All of these events trigger failover on the old platform (Pacemaker 1.0 on 
RHEL5).

However, on the new platform (Pacemaker 1.1 on Ubuntu), neither an interface 
failure nor an auxiliary process failure triggers failover. Pacemaker goes into 
a loop in which it stops and starts the active Encryptor resource and never 
promotes the standby Encryptor resource. Manually cleaning up the failed 
resource with "crm_resource --cleanup" clears the jam and the standby Encryptor 
resource is promoted; so does taking the former active node offline completely.
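For reference, the manual recovery looks roughly like this (the resource name is 
just an example; we clean up whichever resource actually failed):

```shell
# One-shot cluster status including fail counts, to see what Pacemaker
# thinks has failed:
crm_mon -1rf

# Clear the failure history for the failed resource (example: the Ingress
# clone) so the policy engine recomputes placement:
crm_resource --cleanup --resource Ingress
```

After the cleanup, the standby Encryptor instance is promoted as expected.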

The pe-input-X.bz2 files show this sequence:

(EncryptBase:1 is active, EncryptBase:0 is standby)

T: Pacemaker recognizes that Ingress has failed
transition: recover Ingress on active node

T+1: transition: recover Ingress on active node

T+2: transition: recover Ingress on active node

T+3: transitions: promote EncryptBase:0, demote EncryptBase:1, stop Ingress on 
active node (no-op)

T+4: EncryptBase:1 demoted (both clones are now in slave mode), Ingress stopped
transitions: promote EncryptBase:0, stop EncryptBase:1

T+5: EncryptBase:1 stopped, EncryptBase:0 still in slave role
transitions: promote EncryptBase:0, start EncryptBase:1

T+6: EncryptBase:1 started (slave role)
transitions: promote EncryptBase:0, stop EncryptBase:1

The last two steps repeat. Although pengine has decided that EncryptBase:0 
should be promoted, Pacemaker keeps stopping and starting EncryptBase:1 (the 
one on the node with the failed interface) without ever promoting EncryptBase:0.

More precisely, crmd never issues the command that would cause promotion. For a 
normal promotion, I see a sequence like this:

2017-01-12T20:04:39.887154+00:00 encryptor4 pengine[2201]:   notice: 
LogActions: Promote EncryptBase:0  (Slave -> Master encryptor4)
2017-01-12T20:04:39.888018+00:00 encryptor4 pengine[2201]:   notice: 
process_pe_message: Calculated Transition 3: 
/var/lib/pacemaker/pengine/pe-input-3.bz2
2017-01-12T20:04:39.888428+00:00 encryptor4 crmd[2202]:   notice: 
te_rsc_command: Initiating action 9: promote EncryptBase_promote_0 on 
encryptor4 (local)
2017-01-12T20:04:39.903827+00:00 encryptor4 Encryptor_ResourceAgent: INFO: 
Promoting Encryptor.
2017-01-12T20:04:44.959804+00:00 encryptor4 crmd[2202]:   notice: 
process_lrm_event: LRM operation EncryptBase_promote_0 (call=42, rc=0, 
cib-update=43, confirmed=true) ok

in which crmd initiates the promote action and the resource agent logs a message 
indicating that it was called with the argument "promote".

In contrast, the looping sections look like this:

(EncryptBase:1 on encryptor5 is the active/Master instance, EncryptBase:0 on 
encryptor4 is the standby/Slave instance)

2017-01-12T20:12:36.548980+00:00 encryptor4 pengine[2201]:   notice: 
LogActions: Promote EncryptBase:0        (Slave -> Master encryptor4)
2017-01-12T20:12:36.549005+00:00 encryptor4 pengine[2201]:   notice: 
LogActions: Stop    EncryptBase:1        (encryptor5)
2017-01-12T20:12:36.550306+00:00 encryptor4 pengine[2201]:   notice: 
process_pe_message: Calculated Transition 15: 
/var/lib/pacemaker/pengine/pe-input-15.bz2
2017-01-12T20:12:36.550958+00:00 encryptor4 crmd[2202]:   notice: 
te_rsc_command: Initiating action 14: stop EncryptBase_stop_0 on encryptor5
2017-01-12T20:12:38.649416+00:00 encryptor4 crmd[2202]:   notice: run_graph: 
Transition 15 (Complete=3, Pending=0, Fired=0, Skipped=4, Incomplete=1, 
Source=/var/lib/pacemaker/pengine/pe-input-15.bz2): Stopped


2017-01-12T20:12:38.655686+00:00 encryptor4 pengine[2201]:   notice: 
LogActions: Promote EncryptBase:0        (Slave -> Master encryptor4)
2017-01-12T20:12:38.655706+00:00 encryptor4 pengine[2201]:   notice: 
LogActions: Start   EncryptBase:1        (encryptor5)
2017-01-12T20:12:38.656696+00:00 encryptor4 pengine[2201]:   notice: 
process_pe_message: Calculated Transition 16: 
/var/lib/pacemaker/pengine/pe-input-16.bz2
2017-01-12T20:12:38.657426+00:00 encryptor4 crmd[2202]:   notice: 
te_rsc_command: Initiating action 14: start EncryptBase_start_0 on encryptor5
2017-01-12T20:12:43.790672+00:00 encryptor4 crmd[2202]:   notice: run_graph: 
Transition 16 (Complete=3, Pending=0, Fired=0, Skipped=4, Incomplete=1, 
Source=/var/lib/pacemaker/pengine/pe-input-16.bz2): Stopped

The promote action is never initiated, and the RA promote operation is never 
called. But the normal logs don't explain why.

The debug logs are overwhelming. Even if the answer is in there, I don’t think 
I can extract it from all the detail. I’ve tried looking at the dot files 
created by crm_simulate but they either aren’t being generated correctly or 
aren’t rendering correctly on my machine.
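In case it matters, this is roughly how I have been trying to generate the 
graphs (the file number and output paths are just examples):

```shell
# Replay a saved policy-engine input and save the transition graph as a
# dot file (-x: input file, -S: simulate, -s: show allocation scores,
# -D: save dot file):
crm_simulate -S -s \
    -x /var/lib/pacemaker/pengine/pe-input-15.bz2 \
    -D /tmp/transition-15.dot

# Render the graph with Graphviz; SVG tends to cope better than PNG
# with large transition graphs:
dot -Tsvg /tmp/transition-15.dot -o /tmp/transition-15.svg
```

Possibly the -s (show scores) output alone would reveal why the promote action 
is never scheduled, without needing the graph at all.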

Any ideas what we need to do to get failover working again? I’m fine with 
leaving the formerly active node in a state where it requires manual 
intervention to get it going again. I just want the standby node to take over 
if an interface goes down.

Here is the configuration:

<configuration>
<crm_config>
<cluster_property_set id="cib-bootstrap-options">
<nvpair name="stonith-enabled" value="false" 
id="cib-bootstrap-options-stonith-enabled"/>
<nvpair name="no-quorum-policy" value="ignore" 
id="cib-bootstrap-options-no-quorum-policy"/>
<nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" 
value="1484336062"/>
</cluster_property_set>
</crm_config>
<nodes>
<node id="3232262401" uname="encryptor4"/>
<node id="3232262402" uname="encryptor5"/>
</nodes>
<resources>
<master id="Encryptor">
<meta_attributes id="Encryptor-meta_attributes">
<nvpair name="clone-max" value="2" id="Encryptor-meta_attributes-clone-max"/>
<nvpair name="clone-node-max" value="1" 
id="Encryptor-meta_attributes-clone-node-max"/>
<nvpair name="master-max" value="1" id="Encryptor-meta_attributes-master-max"/>
<nvpair name="notify" value="false" id="Encryptor-meta_attributes-notify"/>
<nvpair name="target-role" value="Master" 
id="Encryptor-meta_attributes-target-role"/>
</meta_attributes>
<primitive id="EncryptBase" class="ocf" provider="fnord" type="encryptor">
<operations>
<op name="start" interval="0s" timeout="20s" id="EncryptBase-start-0s"/>
<op name="monitor" interval="1s" role="Master" timeout="2s" 
id="EncryptBase-monitor-1s"/>
<op name="monitor" interval="2s" role="Slave" timeout="2s" 
id="EncryptBase-monitor-2s"/>
</operations>
</primitive>
</master>
<clone id="CredProxy">
<meta_attributes id="CredProxy-meta_attributes">
<nvpair name="clone-max" value="2" id="CredProxy-meta_attributes-clone-max"/>
<nvpair name="clone-node-max" value="1" 
id="CredProxy-meta_attributes-clone-node-max"/>
<nvpair name="notify" value="false" id="CredProxy-meta_attributes-notify"/>
<nvpair name="target-role" value="Started" 
id="CredProxy-meta_attributes-target-role"/>
</meta_attributes>
<primitive id="CredBase" class="ocf" provider="fnord" type="credproxy">
<operations>
<op name="start" interval="0s" timeout="20s" id="CredBase-start-0s"/>
<op name="monitor" interval="1s" timeout="2s" id="CredBase-monitor-1s"/>
</operations>
</primitive>
</clone>
<clone id="Ingress">
<meta_attributes id="Ingress-meta_attributes">
<nvpair name="clone-max" value="2" id="Ingress-meta_attributes-clone-max"/>
<nvpair name="clone-node-max" value="1" 
id="Ingress-meta_attributes-clone-node-max"/>
<nvpair name="notify" value="false" id="Ingress-meta_attributes-notify"/>
<nvpair name="target-role" value="Started" 
id="Ingress-meta_attributes-target-role"/>
</meta_attributes>
<primitive id="IngressBase" class="ocf" provider="fnord" type="interface">
<operations>
<op name="start" interval="0s" timeout="5s" id="IngressBase-start-0s"/>
<op name="monitor" interval="1s" timeout="1s" id="IngressBase-monitor-1s"/>
</operations>
<instance_attributes id="IngressBase-instance_attributes">
<nvpair name="interface" value="em1" 
id="IngressBase-instance_attributes-interface"/>
<nvpair name="label" value="ingress" 
id="IngressBase-instance_attributes-label"/>
<nvpair name="min_retries" value="5" 
id="IngressBase-instance_attributes-min_retries"/>
<nvpair name="max_retries" value="100" 
id="IngressBase-instance_attributes-max_retries"/>
</instance_attributes>
</primitive>
</clone>
<clone id="Egress">
<meta_attributes id="Egress-meta_attributes">
<nvpair name="clone-max" value="2" id="Egress-meta_attributes-clone-max"/>
<nvpair name="clone-node-max" value="1" 
id="Egress-meta_attributes-clone-node-max"/>
<nvpair name="notify" value="false" id="Egress-meta_attributes-notify"/>
<nvpair name="target-role" value="Started" 
id="Egress-meta_attributes-target-role"/>
</meta_attributes>
<primitive id="EgressBase" class="ocf" provider="fnord" type="interface">
<operations>
<op name="start" interval="0s" timeout="5s" id="EgressBase-start-0s"/>
<op name="monitor" interval="1s" timeout="1s" id="EgressBase-monitor-1s"/>
</operations>
<instance_attributes id="EgressBase-instance_attributes">
<nvpair name="interface" value="em2" 
id="EgressBase-instance_attributes-interface"/>
<nvpair name="label" value="egress" id="EgressBase-instance_attributes-label"/>
<nvpair name="min_retries" value="5" 
id="EgressBase-instance_attributes-min_retries"/>
<nvpair name="max_retries" value="100" 
id="EgressBase-instance_attributes-max_retries"/>
</instance_attributes>
</primitive>
</clone>
</resources>
<constraints>
<rsc_colocation id="encryptor-with-credproxy" score="INFINITY" rsc="Encryptor" 
rsc-role="Master" with-rsc="CredProxy" with-rsc-role="Started"/>
<rsc_colocation id="encryptor-with-ingress" score="INFINITY" rsc="Encryptor" 
rsc-role="Master" with-rsc="Ingress" with-rsc-role="Started"/>
<rsc_colocation id="encryptor-with-egress" score="INFINITY" rsc="Encryptor" 
rsc-role="Master" with-rsc="Egress" with-rsc-role="Started"/>
</constraints>
<rsc_defaults>
<meta_attributes id="rsc-options">
<nvpair name="resource-stickiness" value="10" 
id="rsc-options-resource-stickiness"/>
</meta_attributes>
</rsc_defaults>
</configuration>


Thanks for your time.

_______________________________________________
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
