Thanks, here are the logs; they contain information about how it tried to start 
resources on the nodes.
Keep in mind that node1 was already running the resources, and I simulated a 
problem by taking down the HA interface.
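For reference, a rough sketch of the kind of command used to simulate the 
failure ("ha0" is only a placeholder for the real HA interface name):

    # on node1: take the dedicated cluster/HA interface offline
    ifconfig ha0 down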
 
Gabriele
 
 
Sonicle S.r.l. : http://www.sonicle.com
Music: http://www.gabrielebulfon.com
eXoplanets : https://gabrielebulfon.bandcamp.com/album/exoplanets
 




----------------------------------------------------------------------------------

From: Ulrich Windl <ulrich.wi...@rz.uni-regensburg.de>
To: users@clusterlabs.org 
Date: 16 December 2020 15:45:36 CET
Subject: [ClusterLabs] Antw: [EXT] delaying start of a resource


>>> Gabriele Bulfon <gbul...@sonicle.com> wrote on 16.12.2020 at 15:32 in
message <1523391015.734.1608129155836@www>:
> Hi, I now have a two-node cluster using stonith with different 
> pcmk_delay_base values, so that node 1 has priority to stonith node 2 in case 
> of problems.
> 
> Though, there is still one problem: once node 2 delays its stonith action 
> for 10 seconds, and node 1 just 1 second, node 2 does not delay the start of 
> resources, so it happens that while it is not yet powered off by node 1 (and is 
> waiting its delay to power off node 1), it actually starts resources, causing a 
> moment of a few seconds where both the NFS IP and the ZFS pool (!!!!!) are 
> mounted by both nodes!

AFAIK Pacemaker will not start resources on a node that is scheduled for 
stonith. Even more: Pacemaker will try to stop resources on a node scheduled 
for stonith in order to start them elsewhere.
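For reference, asymmetric fencing delays like the ones described above are 
usually set as instance attributes on the two fence devices, roughly like this 
(crm shell sketch; the remaining external/ipmi parameters are omitted, and the 
delay values are taken from the description above):

    # device that fences xstha1: long delay, so xstha1 wins a mutual-fencing race
    crm configure primitive xstha1-stonith stonith:external/ipmi \
        params pcmk_delay_base=10s

    # device that fences xstha2: short delay
    crm configure primitive xstha2-stonith stonith:external/ipmi \
        params pcmk_delay_base=1s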

> How can I delay node 2's resource start until the delayed stonith action is 
> done? Or how can I just delay the resource start so I can make it larger than 
> its pcmk_delay_base?

We probably need to see logs and configs to understand.
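For example, something like the following usually captures both (crm shell and 
Pacemaker tools; exact options may vary with the build):

    # dump the current resource/property configuration
    crm configure show
    # bundle logs, CIB and cluster state for the time window of the test
    crm_report --from "2020-12-16 15:00:00" --to "2020-12-16 15:15:00" /tmp/fence-delay-report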

> 
> Also, it was suggested that I set "stonith-enabled=true", but I don't know 
> where to set this flag (cib-bootstrap-options is not happy with it...).

I think it's on by default, so you must have set it to false.
In crm shell it is "configure# property stonith-enabled=...".
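From an ordinary shell that is a one-liner (assuming crmsh is installed):

    # set the property non-interactively and verify it
    crm configure property stonith-enabled=true
    crm configure show | grep stonith-enabled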

Regards,
Ulrich


_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Dec 16 15:08:54 [642] xstorage2 corosync notice  [TOTEM ] A processor failed, 
forming new configuration.
Dec 16 15:08:56 [642] xstorage2 corosync notice  [TOTEM ] A new membership 
(10.100.100.2:408) was formed. Members left: 1
Dec 16 15:08:56 [642] xstorage2 corosync notice  [TOTEM ] Failed to receive the 
leave message. failed: 1
Dec 16 15:08:56 [666]      attrd:     info: pcmk_cpg_membership:        Group 
attrd event 2: xstha1 (node 1 pid 710) left via cluster exit
Dec 16 15:08:56 [663]        cib:     info: pcmk_cpg_membership:        Group 
cib event 2: xstha1 (node 1 pid 707) left via cluster exit
Dec 16 15:08:56 [662] pacemakerd:     info: pcmk_cpg_membership:        Group 
pacemakerd event 2: xstha1 (node 1 pid 687) left via cluster exit
Dec 16 15:08:56 [642] xstorage2 corosync notice  [QUORUM] Members[1]: 2
Dec 16 15:08:56 [662] pacemakerd:     info: crm_update_peer_proc:       
pcmk_cpg_membership: Node xstha1[1] - corosync-cpg is now offline
Dec 16 15:08:56 [666]      attrd:     info: crm_update_peer_proc:       
pcmk_cpg_membership: Node xstha1[1] - corosync-cpg is now offline
Dec 16 15:08:56 [662] pacemakerd:     info: pcmk_cpg_membership:        Group 
pacemakerd event 2: xstha2 (node 2 pid 662) is member
Dec 16 15:08:56 [642] xstorage2 corosync notice  [MAIN  ] Completed service 
synchronization, ready to provide service.
Dec 16 15:08:56 [668]       crmd:     info: pcmk_cpg_membership:        Group 
crmd event 2: xstha1 (node 1 pid 712) left via cluster exit
Dec 16 15:08:56 [664] stonith-ng:     info: pcmk_cpg_membership:        Group 
stonith-ng event 2: xstha1 (node 1 pid 708) left via cluster exit
Dec 16 15:08:56 [663]        cib:     info: crm_update_peer_proc:       
pcmk_cpg_membership: Node xstha1[1] - corosync-cpg is now offline
Dec 16 15:08:56 [668]       crmd:     info: crm_update_peer_proc:       
pcmk_cpg_membership: Node xstha1[1] - corosync-cpg is now offline
Dec 16 15:08:56 [666]      attrd:   notice: attrd_remove_voter: Lost attribute 
writer xstha1
Dec 16 15:08:56 [664] stonith-ng:     info: crm_update_peer_proc:       
pcmk_cpg_membership: Node xstha1[1] - corosync-cpg is now offline
Dec 16 15:08:56 [662] pacemakerd:     info: pcmk_quorum_notification:   Quorum 
retained | membership=408 members=1
Dec 16 15:08:56 [663]        cib:   notice: crm_update_peer_state_iter: Node 
xstha1 state is now lost | nodeid=1 previous=member source=crm_update_peer_proc
Dec 16 15:08:56 [664] stonith-ng:   notice: crm_update_peer_state_iter: Node 
xstha1 state is now lost | nodeid=1 previous=member source=crm_update_peer_proc
Dec 16 15:08:56 [662] pacemakerd:   notice: crm_update_peer_state_iter: Node 
xstha1 state is now lost | nodeid=1 previous=member source=crm_reap_unseen_nodes
Dec 16 15:08:56 [668]       crmd:     info: peer_update_callback:       Client 
xstha1/peer now has status [offline] (DC=xstha1, changed=4000000)
Dec 16 15:08:56 [663]        cib:     info: crm_reap_dead_member:       
Removing node with name xstha1 and id 1 from membership cache
Dec 16 15:08:56 [666]      attrd:     info: attrd_start_election_if_needed:     
Starting an election to determine the writer
Dec 16 15:08:56 [663]        cib:   notice: reap_crm_member:    Purged 1 peer 
with id=1 and/or uname=xstha1 from the membership cache
Dec 16 15:08:56 [668]       crmd:   notice: peer_update_callback:       Our 
peer on the DC (xstha1) is dead
Dec 16 15:08:56 [663]        cib:     info: pcmk_cpg_membership:        Group 
cib event 2: xstha2 (node 2 pid 663) is member
Dec 16 15:08:56 [664] stonith-ng:     info: crm_reap_dead_member:       
Removing node with name xstha1 and id 1 from membership cache
Dec 16 15:08:56 [662] pacemakerd:     info: mcp_cpg_deliver:    Ignoring 
process list sent by peer for local node
Dec 16 15:08:56 [666]      attrd:   notice: crm_update_peer_state_iter: Node 
xstha1 state is now lost | nodeid=1 previous=member source=crm_update_peer_proc
Dec 16 15:08:56 [664] stonith-ng:   notice: reap_crm_member:    Purged 1 peer 
with id=1 and/or uname=xstha1 from the membership cache
Dec 16 15:08:56 [668]       crmd:     info: controld_delete_node_state: 
Deleting transient attributes for node xstha1 (via CIB call 18) | 
xpath=//node_state[@uname='xstha1']/transient_attributes
Dec 16 15:08:56 [664] stonith-ng:     info: pcmk_cpg_membership:        Group 
stonith-ng event 2: xstha2 (node 2 pid 664) is member
Dec 16 15:08:56 [666]      attrd:   notice: attrd_peer_remove:  Removing all 
xstha1 attributes for peer loss
Dec 16 15:08:56 [668]       crmd:     info: pcmk_cpg_membership:        Group 
crmd event 2: xstha2 (node 2 pid 668) is member
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Forwarding cib_delete operation for section 
//node_state[@uname='xstha1']/transient_attributes to all (origin=local/crmd/18)
Dec 16 15:08:56 [666]      attrd:     info: crm_reap_dead_member:       
Removing node with name xstha1 and id 1 from membership cache
Dec 16 15:08:56 [668]       crmd:   notice: do_state_transition:        State 
transition S_NOT_DC -> S_ELECTION | input=I_ELECTION 
cause=C_CRMD_STATUS_CALLBACK origin=peer_update_callback
Dec 16 15:08:56 [666]      attrd:   notice: reap_crm_member:    Purged 1 peer 
with id=1 and/or uname=xstha1 from the membership cache
Dec 16 15:08:56 [668]       crmd:     info: update_dc:  Unset DC. Was xstha1
Dec 16 15:08:56 [666]      attrd:     info: pcmk_cpg_membership:        Group 
attrd event 2: xstha2 (node 2 pid 666) is member
Dec 16 15:08:56 [666]      attrd:     info: election_check:     election-attrd 
won by local node
Dec 16 15:08:56 [668]       crmd:     info: pcmk_quorum_notification:   Quorum 
retained | membership=408 members=1
Dec 16 15:08:56 [666]      attrd:   notice: attrd_declare_winner:       
Recorded local node as attribute writer (was unset)
Dec 16 15:08:56 [668]       crmd:   notice: crm_update_peer_state_iter: Node 
xstha1 state is now lost | nodeid=1 previous=member source=crm_reap_unseen_nodes
Dec 16 15:08:56 [668]       crmd:     info: peer_update_callback:       Cluster 
node xstha1 is now lost (was member)
Dec 16 15:08:56 [666]      attrd:     info: write_attribute:    Processed 1 
private change for #attrd-protocol, id=n/a, set=n/a
Dec 16 15:08:56 [668]       crmd:     info: election_check:     election-DC won 
by local node
Dec 16 15:08:56 [668]       crmd:     info: do_log:     Input I_ELECTION_DC 
received in state S_ELECTION from election_win_cb
Dec 16 15:08:56 [668]       crmd:   notice: do_state_transition:        State 
transition S_ELECTION -> S_INTEGRATION | input=I_ELECTION_DC 
cause=C_FSA_INTERNAL origin=election_win_cb
Dec 16 15:08:56 [668]       crmd:     info: do_te_control:      Registering TE 
UUID: f340fcfc-17fa-ebf0-c5bf-8299546d41b6
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Completed cib_delete operation for section 
//node_state[@uname='xstha1']/transient_attributes: OK (rc=0, 
origin=xstha2/crmd/18, version=0.46.19)
Dec 16 15:08:56 [668]       crmd:     info: set_graph_functions:        Setting 
custom graph functions
Dec 16 15:08:56 [668]       crmd:     info: do_dc_takeover:     Taking over DC 
status for this partition
Dec 16 15:08:56 [663]        cib:     info: cib_process_readwrite:      We are 
now in R/W mode
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Completed cib_master operation for section 'all': OK (rc=0, 
origin=local/crmd/19, version=0.46.19)
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section cib to all (origin=local/crmd/20)
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Completed cib_modify operation for section cib: OK (rc=0, 
origin=xstha2/crmd/20, version=0.46.19)
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section crm_config to all 
(origin=local/crmd/22)
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Completed cib_modify operation for section crm_config: OK (rc=0, 
origin=xstha2/crmd/22, version=0.46.19)
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section crm_config to all 
(origin=local/crmd/24)
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Completed cib_modify operation for section crm_config: OK (rc=0, 
origin=xstha2/crmd/24, version=0.46.19)
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section crm_config to all 
(origin=local/crmd/26)
Dec 16 15:08:56 [668]       crmd:     info: corosync_cluster_name:      Cannot 
get totem.cluster_name: CS_ERR_NOT_EXIST  (12)
Dec 16 15:08:56 [668]       crmd:     info: join_make_offer:    Making join-1 
offers based on membership event 408
Dec 16 15:08:56 [668]       crmd:     info: join_make_offer:    Sending join-1 
offer to xstha2
Dec 16 15:08:56 [668]       crmd:     info: join_make_offer:    Not making 
join-1 offer to inactive node xstha1
Dec 16 15:08:56 [668]       crmd:     info: do_dc_join_offer_all:       Waiting 
on join-1 requests from 1 outstanding node
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Completed cib_modify operation for section crm_config: OK (rc=0, 
origin=xstha2/crmd/26, version=0.46.19)
Dec 16 15:08:56 [668]       crmd:     info: update_dc:  Set DC to xstha2 
(3.0.14)
Dec 16 15:08:56 [668]       crmd:     info: crm_update_peer_expected:   
update_dc: Node xstha2[2] - expected state is now member (was (null))
Dec 16 15:08:56 [668]       crmd:     info: do_state_transition:        State 
transition S_INTEGRATION -> S_FINALIZE_JOIN | input=I_INTEGRATED 
cause=C_FSA_INTERNAL origin=check_join_state
Dec 16 15:08:56 [663]        cib:     info: cib_process_replace:        Digest 
matched on replace from xstha2: 4835352cb7b4920917d8beee219bc962
Dec 16 15:08:56 [663]        cib:     info: cib_process_replace:        
Replaced 0.46.19 with 0.46.19 from xstha2
Dec 16 15:08:56 [668]       crmd:     info: controld_delete_node_state: 
Deleting resource history for node xstha2 (via CIB call 31) | 
xpath=//node_state[@uname='xstha2']/lrm
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Completed cib_replace operation for section 'all': OK (rc=0, 
origin=xstha2/crmd/29, version=0.46.19)
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section nodes to all (origin=local/crmd/30)
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Forwarding cib_delete operation for section //node_state[@uname='xstha2']/lrm 
to all (origin=local/crmd/31)
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section status to all (origin=local/crmd/32)
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Completed cib_modify operation for section nodes: OK (rc=0, 
origin=xstha2/crmd/30, version=0.46.19)
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     Diff: --- 
0.46.19 2
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     Diff: +++ 
0.46.20 (null)
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     -- 
/cib/status/node_state[@id='2']/lrm[@id='2']
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     +  /cib:  
@num_updates=20
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Completed cib_delete operation for section //node_state[@uname='xstha2']/lrm: 
OK (rc=0, origin=xstha2/crmd/31, version=0.46.20)
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     Diff: --- 
0.46.20 2
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     Diff: +++ 
0.46.21 (null)
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     +  /cib:  
@num_updates=21
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     +  
/cib/status/node_state[@id='2']:  @crm-debug-origin=do_lrm_query_internal
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++ 
/cib/status/node_state[@id='2']:  <lrm id="2"/>
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                       <lrm_resources>
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                         <lrm_resource id="zpool_data" type="ZFS" class="ocf" 
provider="heartbeat">
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                           <lrm_rsc_op id="zpool_data_last_0" 
operation_key="zpool_data_monitor_0" operation="monitor" 
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" 
transition-key="3:3:7:cc8faf12-ac24-cc9c-c212-effe6840ca76" 
transition-magic="0:7;3:3:7:cc8faf12-ac24-cc9c-c212-effe6840ca76" 
exit-reason="" on_node="xstha2" call-id="13" rc-code="7" op-status="0" 
interval="0" last-run="1608127496" last-rc-change="1608127
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                         </lrm_resource>
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                         <lrm_resource id="xstha1-stonith" type="external/ipmi" 
class="stonith">
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                           <lrm_rsc_op id="xstha1-stonith_last_0" 
operation_key="xstha1-stonith_start_0" operation="start" 
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" 
transition-key="9:3:0:cc8faf12-ac24-cc9c-c212-effe6840ca76" 
transition-magic="0:0;9:3:0:cc8faf12-ac24-cc9c-c212-effe6840ca76" 
exit-reason="" on_node="xstha2" call-id="22" rc-code="0" op-status="0" 
interval="0" last-run="1608127496" last-rc-change="160
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                           <lrm_rsc_op id="xstha1-stonith_monitor_25000" 
operation_key="xstha1-stonith_monitor_25000" operation="monitor" 
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" 
transition-key="10:3:0:cc8faf12-ac24-cc9c-c212-effe6840ca76" 
transition-magic="0:0;10:3:0:cc8faf12-ac24-cc9c-c212-effe6840ca76" 
exit-reason="" on_node="xstha2" call-id="24" rc-code="0" op-status="0" 
interval="25000" last-rc-change="1608
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                         </lrm_resource>
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                         <lrm_resource id="xstha2-stonith" type="external/ipmi" 
class="stonith">
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                           <lrm_rsc_op id="xstha2-stonith_last_0" 
operation_key="xstha2-stonith_monitor_0" operation="monitor" 
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" 
transition-key="5:3:7:cc8faf12-ac24-cc9c-c212-effe6840ca76" 
transition-magic="0:7;5:3:7:cc8faf12-ac24-cc9c-c212-effe6840ca76" 
exit-reason="" on_node="xstha2" call-id="21" rc-code="7" op-status="0" 
interval="0" last-run="1608127496" last-rc-change=
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                         </lrm_resource>
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                         <lrm_resource id="xstha1_san0_IP" type="IPaddr" 
class="ocf" provider="heartbeat">
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                           <lrm_rsc_op id="xstha1_san0_IP_last_0" 
operation_key="xstha1_san0_IP_monitor_0" operation="monitor" 
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" 
transition-key="1:3:7:cc8faf12-ac24-cc9c-c212-effe6840ca76" 
transition-magic="0:7;1:3:7:cc8faf12-ac24-cc9c-c212-effe6840ca76" 
exit-reason="" on_node="xstha2" call-id="5" rc-code="7" op-status="0" 
interval="0" last-run="1608127496" last-rc-change="
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                         </lrm_resource>
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                         <lrm_resource id="xstha2_san0_IP" type="IPaddr" 
class="ocf" provider="heartbeat">
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                           <lrm_rsc_op id="xstha2_san0_IP_last_0" 
operation_key="xstha2_san0_IP_start_0" operation="start" 
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14" 
transition-key="7:3:0:cc8faf12-ac24-cc9c-c212-effe6840ca76" 
transition-magic="0:0;7:3:0:cc8faf12-ac24-cc9c-c212-effe6840ca76" 
exit-reason="" on_node="xstha2" call-id="23" rc-code="0" op-status="0" 
interval="0" last-run="1608127497" last-rc-change="160
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                         </lrm_resource>
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                       </lrm_resources>
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     ++              
                     </lrm>
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Completed cib_modify operation for section status: OK (rc=0, 
origin=xstha2/crmd/32, version=0.46.21)
Dec 16 15:08:56 [668]       crmd:     info: do_state_transition:        State 
transition S_FINALIZE_JOIN -> S_POLICY_ENGINE | input=I_FINALIZED 
cause=C_FSA_INTERNAL origin=check_join_state
Dec 16 15:08:56 [668]       crmd:     info: abort_transition_graph:     
Transition aborted: Peer Cancelled | source=do_te_invoke:143 complete=true
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section nodes to all (origin=local/crmd/35)
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section status to all (origin=local/crmd/36)
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Forwarding cib_modify operation for section cib to all (origin=local/crmd/37)
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Completed cib_modify operation for section nodes: OK (rc=0, 
origin=xstha2/crmd/35, version=0.46.21)
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     Diff: --- 
0.46.21 2
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     Diff: +++ 
0.46.22 (null)
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     +  /cib:  
@num_updates=22
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     +  
/cib/status/node_state[@id='1']:  @in_ccm=false, @crmd=offline, 
@crm-debug-origin=do_state_transition, @join=down
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     +  
/cib/status/node_state[@id='2']:  @crm-debug-origin=do_state_transition
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Completed cib_modify operation for section status: OK (rc=0, 
origin=xstha2/crmd/36, version=0.46.22)
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     Diff: --- 
0.46.22 2
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     Diff: +++ 
0.46.23 (null)
Dec 16 15:08:56 [663]        cib:     info: cib_perform_op:     +  /cib:  
@num_updates=23, @dc-uuid=2
Dec 16 15:08:56 [663]        cib:     info: cib_file_backup:    Archived 
previous version as /sonicle/var/cluster/lib/pacemaker/cib/cib-8.raw
Dec 16 15:08:56 [663]        cib:     info: cib_process_request:        
Completed cib_modify operation for section cib: OK (rc=0, 
origin=xstha2/crmd/37, version=0.46.23)
Dec 16 15:08:56 [663]        cib:     info: cib_file_write_with_digest: Wrote 
version 0.46.0 of the CIB to disk (digest: 1ea3e3ee6c388f74623494869acf32d0)
Dec 16 15:08:56 [663]        cib:     info: cib_file_write_with_digest: Reading 
cluster configuration file /sonicle/var/cluster/lib/pacemaker/cib/cib.4LaWbc 
(digest: /sonicle/var/cluster/lib/pacemaker/cib/cib.5LaWbc)
Dec 16 15:08:56 [667]    pengine:  warning: unpack_config:      Support for 
stonith-action of 'poweroff' is deprecated and will be removed in a future 
release (use 'off' instead)
Dec 16 15:08:56 [667]    pengine:  warning: pe_fence_node:      Cluster node 
xstha1 will be fenced: peer is no longer part of the cluster
Dec 16 15:08:56 [667]    pengine:  warning: determine_online_status:    Node 
xstha1 is unclean
Dec 16 15:08:56 [667]    pengine:     info: determine_online_status_fencing:    
Node xstha2 is active
Dec 16 15:08:56 [667]    pengine:     info: determine_online_status:    Node 
xstha2 is online
Dec 16 15:08:56 [667]    pengine:     info: unpack_node_loop:   Node 1 is 
already processed
Dec 16 15:08:56 [667]    pengine:     info: unpack_node_loop:   Node 2 is 
already processed
Dec 16 15:08:56 [667]    pengine:     info: unpack_node_loop:   Node 1 is 
already processed
Dec 16 15:08:56 [667]    pengine:     info: unpack_node_loop:   Node 2 is 
already processed
Dec 16 15:08:56 [667]    pengine:     info: common_print:       xstha1_san0_IP  
(ocf::heartbeat:IPaddr):        Started xstha1 (UNCLEAN)
Dec 16 15:08:56 [667]    pengine:     info: common_print:       xstha2_san0_IP  
(ocf::heartbeat:IPaddr):        Started xstha2
Dec 16 15:08:56 [667]    pengine:     info: common_print:       zpool_data      
(ocf::heartbeat:ZFS):   Started xstha1 (UNCLEAN)
Dec 16 15:08:56 [667]    pengine:     info: common_print:       xstha1-stonith  
(stonith:external/ipmi):        Started xstha2
Dec 16 15:08:56 [667]    pengine:     info: common_print:       xstha2-stonith  
(stonith:external/ipmi):        Started xstha1 (UNCLEAN)
Dec 16 15:08:56 [667]    pengine:     info: pcmk__native_allocate:      
Resource xstha2-stonith cannot run anywhere
Dec 16 15:08:56 [667]    pengine:  warning: custom_action:      Action 
xstha1_san0_IP_stop_0 on xstha1 is unrunnable (offline)
Dec 16 15:08:56 [667]    pengine:  warning: custom_action:      Action 
zpool_data_stop_0 on xstha1 is unrunnable (offline)
Dec 16 15:08:56 [667]    pengine:  warning: custom_action:      Action 
xstha2-stonith_stop_0 on xstha1 is unrunnable (offline)
Dec 16 15:08:56 [667]    pengine:  warning: custom_action:      Action 
xstha2-stonith_stop_0 on xstha1 is unrunnable (offline)
Dec 16 15:08:56 [667]    pengine:  warning: stage6:     Scheduling Node xstha1 
for STONITH
Dec 16 15:08:56 [667]    pengine:     info: native_stop_constraints:    
xstha1_san0_IP_stop_0 is implicit after xstha1 is fenced
Dec 16 15:08:56 [667]    pengine:     info: native_stop_constraints:    
zpool_data_stop_0 is implicit after xstha1 is fenced
Dec 16 15:08:56 [667]    pengine:     info: native_stop_constraints:    
xstha2-stonith_stop_0 is implicit after xstha1 is fenced
Dec 16 15:08:56 [667]    pengine:   notice: LogNodeActions:      * Fence (off) 
xstha1 'peer is no longer part of the cluster'
Dec 16 15:08:56 [667]    pengine:   notice: LogAction:   * Move       
xstha1_san0_IP     ( xstha1 -> xstha2 )
Dec 16 15:08:56 [667]    pengine:     info: LogActions: Leave   xstha2_san0_IP  
(Started xstha2)
Dec 16 15:08:56 [667]    pengine:   notice: LogAction:   * Move       
zpool_data         ( xstha1 -> xstha2 )
Dec 16 15:08:56 [667]    pengine:     info: LogActions: Leave   xstha1-stonith  
(Started xstha2)
Dec 16 15:08:56 [667]    pengine:   notice: LogAction:   * Stop       
xstha2-stonith     (           xstha1 )   due to node availability
Dec 16 15:08:56 [667]    pengine:  warning: process_pe_message: Calculated 
transition 0 (with warnings), saving inputs in 
/sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-6.bz2
Dec 16 15:08:56 [668]       crmd:     info: do_state_transition:        State 
transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS 
cause=C_IPC_MESSAGE origin=handle_response
Dec 16 15:08:56 [668]       crmd:     info: do_te_invoke:       Processing 
graph 0 (ref=pe_calc-dc-1608127736-14) derived from 
/sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-6.bz2
Dec 16 15:08:56 [668]       crmd:   notice: te_fence_node:      Requesting 
fencing (off) of node xstha1 | action=1 timeout=60000
Dec 16 15:08:56 [664] stonith-ng:   notice: handle_request:     Client 
crmd.668.c46cefe4 wants to fence (off) 'xstha1' with device '(any)'
Dec 16 15:08:56 [664] stonith-ng:   notice: initiate_remote_stonith_op: 
Requesting peer fencing (off) targeting xstha1 | 
id=3cdbf44e-e860-c100-95e0-db72cc63ae16 state=0
Dec 16 15:08:56 [664] stonith-ng:     info: dynamic_list_search_cb:     
Refreshing port list for xstha1-stonith
Dec 16 15:08:56 [664] stonith-ng:     info: process_remote_stonith_query:       
Query result 1 of 1 from xstha2 for xstha1/off (1 devices) 
3cdbf44e-e860-c100-95e0-db72cc63ae16
Dec 16 15:08:56 [664] stonith-ng:     info: call_remote_stonith:        Total 
timeout set to 60 for peer's fencing targeting xstha1 for 
crmd.668|id=3cdbf44e-e860-c100-95e0-db72cc63ae16
Dec 16 15:08:56 [664] stonith-ng:   notice: call_remote_stonith:        
Requesting that xstha2 perform 'off' action targeting xstha1 | for client 
crmd.668 (72s, 0s)
Dec 16 15:08:56 [664] stonith-ng:   notice: can_fence_host_with_device: 
xstha1-stonith can fence (off) xstha1: dynamic-list
Dec 16 15:08:56 [664] stonith-ng:     info: stonith_fence_get_devices_cb:       
Found 1 matching devices for 'xstha1'
Dec 16 15:08:56 [664] stonith-ng:   notice: schedule_stonith_command:   
Delaying 'off' action targeting xstha1 on xstha1-stonith for 10s (timeout=60s, 
requested_delay=0s, base=10s, max=10s)