I feel the behavior has become worse after adding the reverse colocation constraints. I started with this, and it was exactly what I wanted:

cu_5 <-> Redund_CU1_WB30
cu_4 <-> Redund_CU2_WB30
cu_3 <-> Redund_CU3_WB30
cu_2 <-> Redund_CU5_WB30
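As discussed later in the thread, a score=-INFINITY colocation constraint is directional, so each unordered pair of resources needs a constraint in both directions. A small sketch that enumerates the full symmetric set for these four resources; the generated `pcs` invocations are my assumption for illustration, not the commands actually used here:

```python
from itertools import combinations

# Resource names taken from the thread.
resources = ["cu_2", "cu_3", "cu_4", "cu_5"]

# C(4,2) = 6 unordered pairs; with both directions that is 12 constraints,
# matching the 12 colocation constraints shown in the `pcs constraint` output.
commands = []
for a, b in combinations(resources, 2):
    commands.append(f"pcs constraint colocation add {a} with {b} -INFINITY")
    commands.append(f"pcs constraint colocation add {b} with {a} -INFINITY")

for cmd in commands:
    print(cmd)
```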
However, for some reason Pacemaker decided to move cu_2 from Redund_CU5_WB30 to Redund_CU2_WB30. Any obvious misconfiguration?

*Logs on DC:*

Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: native_print: cu_5 (ocf::redundancy:RedundancyRA): Started Redund_CU1_WB30
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: native_print: cu_4 (ocf::redundancy:RedundancyRA): Started Redund_CU2_WB30
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: native_print: cu_3 (ocf::redundancy:RedundancyRA): Started Redund_CU3_WB30
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: native_print: cu_2 (ocf::redundancy:RedundancyRA): Started Redund_CU5_WB30
Oct 14 16:30:52 [7362] Redund_CU1_WB30 cib: info: cib_file_backup: Archived previous version as /dev/shm/lib/pacemaker/cib/cib-65.raw
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_3
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_3
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_4
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_3
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_3
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_5
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_3
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_5
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_5
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_4
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_5
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_5
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_4
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_3
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_4
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_4
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_5
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_4
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_3
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_5
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_5
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Rolling back scores from cu_3
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_5
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_3
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Rolling back scores from cu_5
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_3
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_3
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_5
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Rolling back scores from cu_3
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_4
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_5
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_4
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: LogActions: Leave cu_5 (Started Redund_CU1_WB30)
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: LogActions: Leave cu_4 (Started Redund_CU2_WB30)
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: LogActions: Leave cu_3 (Started Redund_CU3_WB30)
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: info: LogActions: Leave cu_2 (Started Redund_CU5_WB30)
Oct 14 16:30:52 [7367] Redund_CU1_WB30 crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Oct 14 16:30:52 [7367] Redund_CU1_WB30 crmd: info: do_te_invoke: Processing graph 302 (ref=pe_calc-dc-1476462652-376) derived from /dev/shm/lib/pacemaker/pengine/pe-input-302.bz2
Oct 14 16:30:52 [7367] Redund_CU1_WB30 crmd: notice: run_graph: Transition 302 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/dev/shm/lib/pacemaker/pengine/pe-input-302.bz2): Complete
Oct 14 16:30:52 [7367] Redund_CU1_WB30 crmd: info: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Oct 14 16:30:52 [7367] Redund_CU1_WB30 crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 14 16:30:52 [7366] Redund_CU1_WB30 pengine: notice: process_pe_message: Calculated Transition 302: /dev/shm/lib/pacemaker/pengine/pe-input-302.bz2
Oct 14 16:30:52 [7362] Redund_CU1_WB30 cib: info: cib_file_write_with_digest: Wrote version 0.343.0 of the CIB to disk (digest: 091305e2053f9d31f73fc63ded289df4)
Oct 14 16:30:52 [7362] Redund_CU1_WB30 cib: info: cib_file_write_with_digest: Reading cluster configuration file /dev/shm/lib/pacemaker/cib/cib.LWzdZL (digest: /dev/shm/lib/pacemaker/cib/cib.q7suYr)
Oct 14 16:30:53 [7362] Redund_CU1_WB30 cib: info: cib_perform_op: Diff: --- 0.343.0 2
Oct 14 16:30:53 [7362] Redund_CU1_WB30 cib: info: cib_perform_op: Diff: +++ 0.344.0 (null)
Oct 14 16:30:53 [7362] Redund_CU1_WB30 cib: info: cib_perform_op: + /cib: @epoch=344
Oct 14 16:30:53 [7362] Redund_CU1_WB30 cib: info: cib_perform_op: ++ /cib/configuration/constraints: <rsc_colocation id="colocation-cu_5-cu_2-INFINITY" rsc="cu_5" score="-INFINITY" with-rsc="cu_2"/>
Oct 14 16:30:53 [7367] Redund_CU1_WB30 crmd: info: abort_transition_graph: Transition aborted by rsc_colocation.colocation-cu_5-cu_2-INFINITY 'create': Non-status change (cib=0.344.0, source=te_update_diff:436, path=/cib/configuration/constraints, 1)
Oct 14 16:30:53 [7367] Redund_CU1_WB30 crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Oct 14 16:30:53 [7362] Redund_CU1_WB30 cib: info: cib_process_request: Completed cib_replace operation for section configuration: OK (rc=0, origin=Redund_CU5_WB30/cibadmin/2, version=0.344.0)
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: notice: unpack_config: Relying on watchdog integration for fencing
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: determine_online_status: Node Redund_CU1_WB30 is online
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: determine_online_status: Node Redund_CU2_WB30 is online
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: determine_online_status: Node Redund_CU3_WB30 is online
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: determine_online_status: Node Redund_CU5_WB30 is online
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: native_print: cu_5 (ocf::redundancy:RedundancyRA): Started Redund_CU1_WB30
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: native_print: cu_4 (ocf::redundancy:RedundancyRA): Started Redund_CU2_WB30
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: native_print: cu_3 (ocf::redundancy:RedundancyRA): Started Redund_CU3_WB30
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: native_print: cu_2 (ocf::redundancy:RedundancyRA): Started Redund_CU5_WB30
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_2: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_2: Breaking dependency loop at cu_3
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_2: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_2: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_2: Breaking dependency loop at cu_4
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_2: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_2: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_3
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_3
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_4
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Rolling back scores from cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_3
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_4
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_4: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_4
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_3
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_4
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_4
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_4
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_3
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Rolling back scores from cu_3
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_3: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_2
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Rolling back scores from cu_4
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7362] Redund_CU1_WB30 cib: info: cib_file_backup: Archived previous version as /dev/shm/lib/pacemaker/cib/cib-66.raw
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Rolling back scores from cu_3
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Rolling back scores from cu_4
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_3
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_3
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Rolling back scores from cu_3
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_4
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_5
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: rsc_merge_weights: cu_5: Breaking dependency loop at cu_4
*Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: RecurringOp: Start recurring monitor (30s) for cu_2 on Redund_CU2_WB30*
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: LogActions: Leave cu_5 (Started Redund_CU1_WB30)
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: LogActions: Leave cu_4 (Started Redund_CU2_WB30)
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: info: LogActions: Leave cu_3 (Started Redund_CU3_WB30)
*Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: notice: LogActions: Move cu_2 (Started Redund_CU5_WB30 -> Redund_CU2_WB30)*
Oct 14 16:30:53 [7362] Redund_CU1_WB30 cib: info: cib_file_write_with_digest: Wrote version 0.344.0 of the CIB to disk (digest: c0090fdd0254bfc0cd81d0bbc8bc0a72)
Oct 14 16:30:53 [7362] Redund_CU1_WB30 cib: info: cib_file_write_with_digest: Reading cluster configuration file /dev/shm/lib/pacemaker/cib/cib.znavqE (digest: /dev/shm/lib/pacemaker/cib/cib.eXT5e7)
Oct 14 16:30:53 [7367] Redund_CU1_WB30 crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Oct 14 16:30:53 [7367] Redund_CU1_WB30 crmd: info: do_te_invoke: Processing graph 303 (ref=pe_calc-dc-1476462653-377) derived from /dev/shm/lib/pacemaker/pengine/pe-input-303.bz2
Oct 14 16:30:53 [7367] Redund_CU1_WB30 crmd: notice: te_rsc_command: Initiating action 12: stop cu_2_stop_0 on Redund_CU5_WB30
Oct 14 16:30:53 [7366] Redund_CU1_WB30 pengine: notice: process_pe_message: Calculated Transition 303: /dev/shm/lib/pacemaker/pengine/pe-input-303.bz2
Oct 14 16:30:53 [7362] Redund_CU1_WB30 cib: info: cib_perform_op: Diff: --- 0.344.0 2
Oct 14 16:30:53 [7362] Redund_CU1_WB30 cib: info: cib_perform_op: Diff: +++ 0.344.1 (null)
Oct 14 16:30:53 [7362] Redund_CU1_WB30 cib: info: cib_perform_op: + /cib: @num_updates=1
Oct 14 16:30:53 [7362] Redund_CU1_WB30 cib: info: cib_perform_op: + /cib/status/node_state[@id='181462533']/lrm[@id='181462533']/lrm_resources/lrm_resource[@id='cu_2']/lrm_rsc_op[@id='cu_2_last_0']: @operation_key=cu_2_stop_0, @operation=stop, @transition-key=12:303:0:07413883-c6c4-41b8-a68e-8ba4832aa4f8, @transition-magic=0:0;12:303:0:07413883-c6c4-41b8-a68e-8ba4832aa4f8, @call-id=21, @last-run=1476462653, @last-rc-change=1476462653, @exec-time=237
*Oct 14 16:30:53 [7367] Redund_CU1_WB30 crmd: info: match_graph_event: Action cu_2_stop_0 (12) confirmed on Redund_CU5_WB30 (rc=0)*
*Oct 14 16:30:53 [7367] Redund_CU1_WB30 crmd: notice: te_rsc_command: Initiating action 13: start cu_2_start_0 on Redund_CU2_WB30*

[root@Redund_CU2_WB30 root]# pcs constraint
Location Constraints:
  Resource: cu_2
    Enabled on: Redund_CU5_WB30 (score:0)
    Enabled on: Redund_CU1_WB30 (score:0)
    Enabled on: Redund_CU3_WB30 (score:0)
    Enabled on: Redund_CU2_WB30 (score:0)
  Resource: cu_3
    Enabled on: Redund_CU3_WB30 (score:0)
    Enabled on: Redund_CU1_WB30 (score:0)
    Enabled on: Redund_CU5_WB30 (score:0)
    Enabled on: Redund_CU2_WB30 (score:0)
  Resource: cu_4
    Enabled on: Redund_CU2_WB30 (score:0)
    Enabled on: Redund_CU1_WB30 (score:0)
    Enabled on: Redund_CU5_WB30 (score:0)
    Enabled on: Redund_CU3_WB30 (score:0)
  Resource: cu_5
    Enabled on: Redund_CU1_WB30 (score:0)
    Enabled on: Redund_CU5_WB30 (score:0)
    Enabled on: Redund_CU3_WB30 (score:0)
    Enabled on: Redund_CU2_WB30 (score:0)
Ordering Constraints:
Colocation Constraints:
  cu_2 with cu_3 (score:-INFINITY)
  cu_3 with cu_2 (score:-INFINITY)
  cu_2 with cu_5 (score:-INFINITY)
  cu_5 with cu_2 (score:-INFINITY)
  cu_3 with cu_5 (score:-INFINITY)
  cu_5 with cu_3 (score:-INFINITY)
  cu_4 with cu_3 (score:-INFINITY)
  cu_3 with cu_4 (score:-INFINITY)
  cu_4 with cu_2 (score:-INFINITY)
  cu_2 with cu_4 (score:-INFINITY)
  cu_4 with cu_5 (score:-INFINITY)
  cu_5 with cu_4 (score:-INFINITY)
Ticket Constraints:

-Thanks
Nikhil

On Fri, Oct 14, 2016 at 5:26 PM, Nikhil Utane <nikhil.subscri...@gmail.com> wrote:
> Hi,
>
> Thank you for the responses so far.
> I added reverse colocation as well. However I am seeing some other issue in
> resource movement that I am analyzing.
>
> Thinking further on this, why does "*a not with b*" not imply "*b not with a*"?
> Wouldn't putting "b with a" violate "a not with b"?
>
> Can someone confirm that colocation is required to be configured both ways?
>
> -Thanks
> Nikhil
>
> On Fri, Oct 14, 2016 at 1:09 PM, Vladislav Bogdanov <bub...@hoster-ok.com>
> wrote:
>
>> On October 14, 2016 10:13:17 AM GMT+03:00, Ulrich Windl <
>> ulrich.wi...@rz.uni-regensburg.de> wrote:
>> >>>> Nikhil Utane <nikhil.subscri...@gmail.com> schrieb am 13.10.2016 um
>> >16:43 in Nachricht
>> ><cagnwmjubpucnbgxrohkhsbq0lxovwslfpkupg1r8gjqrfqm...@mail.gmail.com>:
>> >> Ulrich,
>> >>
>> >> I have 4 resources only (not 5, nodes are 5). So then I only need 6
>> >> constraints, right?
>> >>
>> >>      [,1] [,2] [,3] [,4] [,5] [,6]
>> >> [1,] "A"  "A"  "A"  "B"  "B"  "C"
>> >> [2,] "B"  "C"  "D"  "C"  "D"  "D"
>> >
>> >Sorry for my confusion. As Andrei Borzenkov said in
>> ><caa91j0w+epahflg9u6vx_x8lgfkf9rp55g3nocy4ozna9bb...@mail.gmail.com>
>> >you probably have to add (A, B) _and_ (B, A)! Thinking about it, I
>> >wonder whether an easier solution would be using "utilization": If
>> >every node has one token to give, and every resource needs one token,
>> >no two resources will run on one node. Sounds like an easier solution
>> >to me.
>> >
>> >Regards,
>> >Ulrich
>> >
>> >>
>> >> I understand that if I configure a constraint of R1 with R2 with score
>> >> -infinity, then the same applies for R2 with R1 with score -infinity
>> >> (I don't have to configure it explicitly).
>> >> I am not having a problem of multiple resources getting scheduled on
>> >> the same node. Rather, one working resource is unnecessarily getting
>> >> relocated.
>> >>
>> >> -Thanks
>> >> Nikhil
>> >>
>> >> On Thu, Oct 13, 2016 at 7:45 PM, Ulrich Windl <
>> >> ulrich.wi...@rz.uni-regensburg.de> wrote:
>> >>
>> >>> Hi!
>> >>>
>> >>> Don't you need 10 constraints, excluding every possible pair of your
>> >>> 5 resources (named A-E here), like in this table (produced with R):
>> >>>
>> >>>      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
>> >>> [1,] "A"  "A"  "A"  "A"  "B"  "B"  "B"  "C"  "C"  "D"
>> >>> [2,] "B"  "C"  "D"  "E"  "C"  "D"  "E"  "D"  "E"  "E"
>> >>>
>> >>> Ulrich
>> >>>
>> >>> >>> Nikhil Utane <nikhil.subscri...@gmail.com> schrieb am 13.10.2016
>> >>> um 15:59 in Nachricht
>> >>> <cagnwmjw0cwmr3bvr3l9xzcacjuzyczqbzezuzpajxi+pn7o...@mail.gmail.com>:
>> >>> > Hi,
>> >>> >
>> >>> > I have 5 nodes and 4 resources configured.
>> >>> > I have configured constraints such that no two resources can be
>> >>> > co-located.
>> >>> > I brought down a node (which happened to be the DC). I was
>> >>> > expecting the resource on the failed node to be migrated to the
>> >>> > 5th waiting node (the one not running any resource).
>> >>> > However, what happened was that the failed node's resource was
>> >>> > started on another active node (after stopping its existing
>> >>> > resource), and that node's resource was moved to the waiting node.
>> >>> >
>> >>> > What could I be doing wrong?
>> >>> >
>> >>> > <nvpair id="cib-bootstrap-options-have-watchdog" value="true"
>> >>> > name="have-watchdog"/>
>> >>> > <nvpair id="cib-bootstrap-options-dc-version" value="1.1.14-5a6cdd1"
>> >>> > name="dc-version"/>
>> >>> > <nvpair id="cib-bootstrap-options-cluster-infrastructure" value="corosync"
>> >>> > name="cluster-infrastructure"/>
>> >>> > <nvpair id="cib-bootstrap-options-stonith-enabled" value="false"
>> >>> > name="stonith-enabled"/>
>> >>> > <nvpair id="cib-bootstrap-options-no-quorum-policy" value="ignore"
>> >>> > name="no-quorum-policy"/>
>> >>> > <nvpair id="cib-bootstrap-options-default-action-timeout" value="240"
>> >>> > name="default-action-timeout"/>
>> >>> > <nvpair id="cib-bootstrap-options-symmetric-cluster" value="false"
>> >>> > name="symmetric-cluster"/>
>> >>> >
>> >>> > # pcs constraint
>> >>> > Location Constraints:
>> >>> >   Resource: cu_2
>> >>> >     Enabled on: Redun_CU4_Wb30 (score:0)
>> >>> >     Enabled on: Redund_CU2_WB30 (score:0)
>> >>> >     Enabled on: Redund_CU3_WB30 (score:0)
>> >>> >     Enabled on: Redund_CU5_WB30 (score:0)
>> >>> >     Enabled on: Redund_CU1_WB30 (score:0)
>> >>> >   Resource: cu_3
>> >>> >     Enabled on: Redun_CU4_Wb30 (score:0)
>> >>> >     Enabled on: Redund_CU2_WB30 (score:0)
>> >>> >     Enabled on: Redund_CU3_WB30 (score:0)
>> >>> >     Enabled on: Redund_CU5_WB30 (score:0)
>> >>> >     Enabled on: Redund_CU1_WB30 (score:0)
>> >>> >   Resource: cu_4
>> >>> >     Enabled on: Redun_CU4_Wb30 (score:0)
>> >>> >     Enabled on: Redund_CU2_WB30 (score:0)
>> >>> >     Enabled on: Redund_CU3_WB30 (score:0)
>> >>> >     Enabled on: Redund_CU5_WB30 (score:0)
>> >>> >     Enabled on: Redund_CU1_WB30 (score:0)
>> >>> >   Resource: cu_5
>> >>> >     Enabled on: Redun_CU4_Wb30 (score:0)
>> >>> >     Enabled on: Redund_CU2_WB30 (score:0)
>> >>> >     Enabled on: Redund_CU3_WB30 (score:0)
>> >>> >     Enabled on: Redund_CU5_WB30 (score:0)
>> >>> >     Enabled on: Redund_CU1_WB30 (score:0)
>> >>> > Ordering Constraints:
>> >>> > Colocation Constraints:
>> >>> >   cu_3 with cu_2 (score:-INFINITY)
>> >>> >   cu_4 with cu_2 (score:-INFINITY)
>> >>> >   cu_4 with cu_3 (score:-INFINITY)
>> >>> >   cu_5 with cu_2 (score:-INFINITY)
>> >>> >   cu_5 with cu_3 (score:-INFINITY)
>> >>> >   cu_5 with cu_4 (score:-INFINITY)
>> >>> >
>> >>> > -Thanks
>> >>> > Nikhil
>> >
>>
>> Hi,
>>
>> use of utilization (balanced strategy) has one caveat: resources are not
>> moved just because the utilization of one node is lower, when nodes have
>> the same allocation score for the resource.
>> So, after a simultaneous outage of two nodes in a 5-node cluster, it may
>> turn out that one node runs two resources while the two recovered nodes
>> run nothing.
>>
>> The original 'utilization' strategy only limits resource placement; it is
>> not considered when choosing a node for a resource.
>>
>> Vladislav
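For reference, the utilization approach suggested in the quoted thread could be sketched roughly like this. This is a hypothetical sketch, not configuration taken from the thread: the attribute name `tokens` is arbitrary, the node list is the four nodes visible in the DC logs above, and per Vladislav's caveat the placement strategy may still leave nodes unbalanced after a multi-node outage.

```shell
# Sketch only: give every node one "token" of capacity and make every
# resource consume one token, so no node can host two resources.
pcs property set placement-strategy=utilization

for node in Redund_CU1_WB30 Redund_CU2_WB30 Redund_CU3_WB30 Redund_CU5_WB30; do
    pcs node utilization "$node" tokens=1
done

for rsc in cu_2 cu_3 cu_4 cu_5; do
    pcs resource utilization "$rsc" tokens=1
done
```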
_______________________________________________
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org