Hi,

I just updated the timeout for the stop operation on an NFS cluster, and while the timeout was updated, the status suddenly showed this:

Failed Actions:
* nfsserver_monitor_10000 on nfs1aqs1 'unknown error' (1): call=41, status=Timed Out, exitreason='none',
    last-rc-change='Tue Aug 13 14:14:28 2019', queued=0ms, exec=0ms

The command used:

pcs resource update nfsserver op stop timeout=30s

I can't imagine that this is expected to happen. Is there another way to update the timeout that doesn't cause this? I have attached the log of the transition.

Regards,
Dennis
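(Note that the failure shown has last-rc-change of Aug 13, weeks before the update: it appears to be an old monitor failure still in the resource history being re-reported after the configuration change, rather than a new failure. A possible follow-up, sketched under that assumption — `pcs resource cleanup` is the standard command for clearing recorded failures, and the resource name is taken from the config above:)

```shell
# Update the stop timeout (the same command as above).
pcs resource update nfsserver op stop timeout=30s

# Clear the recorded failure history for the resource, so the stale
# monitor timeout from Aug 13 no longer appears under "Failed Actions".
pcs resource cleanup nfsserver

# Confirm the failure entry is gone and the resource is still started.
pcs status
```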
Sep 10 09:39:29 [2378] nfs1a-qs1        cib:     info: cib_process_request: Forwarding cib_replace operation for section configuration to all (origin=local/cibadmin/2)
Sep 10 09:39:29 [2378] nfs1a-qs1        cib:     info: cib_perform_op: Diff: --- 0.76.14 2
Sep 10 09:39:29 [2378] nfs1a-qs1        cib:     info: cib_perform_op: Diff: +++ 0.77.0 8b73092b4ee9744fc4eaff60f8ba8388
Sep 10 09:39:29 [2378] nfs1a-qs1        cib:     info: cib_perform_op: + /cib: @epoch=77, @num_updates=0
Sep 10 09:39:29 [2378] nfs1a-qs1        cib:     info: cib_perform_op: + /cib/configuration/resources/primitive[@id='nfsserver']/operations/op[@id='nfsserver-stop-interval-0s']: @timeout=30s
Sep 10 09:39:29 [2378] nfs1a-qs1        cib:     info: cib_perform_op: ++ /cib/configuration/resources/primitive[@id='nfsserver']: <meta_attributes id="nfsserver-meta_attributes"/>
Sep 10 09:39:29 [2378] nfs1a-qs1        cib:     info: cib_process_request: Completed cib_replace operation for section configuration: OK (rc=0, origin=nfs1aqs1/cibadmin/2, version=0.77.0)
Sep 10 09:39:29 [2383] nfs1a-qs1       crmd:     info: abort_transition_graph: Transition aborted by op.nfsserver-stop-interval-0s 'modify': Configuration change | cib=0.77.0 source=te_update_diff:456 path=/cib/configuration/resources/primitive[@id='nfsserver']/operations/op[@id='nfsserver-stop-interval-0s'] complete=true
Sep 10 09:39:29 [2383] nfs1a-qs1       crmd:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:   notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: determine_online_status: Node nfs1bqs1 is online
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: determine_online_status: Node nfs1aqs1 is online
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:  warning: unpack_rsc_op_failure: Processing failed op monitor for nfsserver on nfs1aqs1: unknown error (1)
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: unpack_node_loop: Node 2 is already processed
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: unpack_node_loop: Node 1 is already processed
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: unpack_node_loop: Node 2 is already processed
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: unpack_node_loop: Node 1 is already processed
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: clone_print: Master/Slave Set: drbd-clone [drbd]
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: short_print: Masters: [ nfs1aqs1 ]
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: short_print: Slaves: [ nfs1bqs1 ]
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: common_print: metadata-fs (ocf::heartbeat:Filesystem): Started nfs1aqs1
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: common_print: medias-fs (ocf::heartbeat:Filesystem): Started nfs1aqs1
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: common_print: nfsserver (ocf::heartbeat:nfsserver): Started nfs1aqs1
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: common_print: vip (ocf::heartbeat:IPaddr2): Started nfs1aqs1
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: get_failcount_full: nfsserver has failed 1 times on nfs1aqs1
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: check_migration_threshold: nfsserver can fail 999999 more times on nfs1aqs1 before being forced off
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: master_color: Promoting drbd:1 (Master nfs1aqs1)
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: master_color: drbd-clone: Promoted 1 instances of a possible 1 to master
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: LogActions: Leave drbd:0 (Slave nfs1bqs1)
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: LogActions: Leave drbd:1 (Master nfs1aqs1)
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: LogActions: Leave metadata-fs (Started nfs1aqs1)
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: LogActions: Leave medias-fs (Started nfs1aqs1)
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: LogActions: Leave nfsserver (Started nfs1aqs1)
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:     info: LogActions: Leave vip (Started nfs1aqs1)
Sep 10 09:39:29 [2382] nfs1a-qs1    pengine:   notice: process_pe_message: Calculated transition 52373, saving inputs in /var/lib/pacemaker/pengine/pe-input-121.bz2
Sep 10 09:39:29 [2383] nfs1a-qs1       crmd:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
Sep 10 09:39:29 [2383] nfs1a-qs1       crmd:     info: do_te_invoke: Processing graph 52373 (ref=pe_calc-dc-1568101169-53552) derived from /var/lib/pacemaker/pengine/pe-input-121.bz2
Sep 10 09:39:29 [2383] nfs1a-qs1       crmd:   notice: run_graph: Transition 52373 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-121.bz2): Complete
Sep 10 09:39:29 [2383] nfs1a-qs1       crmd:     info: do_log: Input I_TE_SUCCESS received in state S_TRANSITION_ENGINE from notify_crmd
Sep 10 09:39:29 [2383] nfs1a-qs1       crmd:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
Sep 10 09:39:29 [2378] nfs1a-qs1        cib:     info: cib_file_backup: Archived previous version as /var/lib/pacemaker/cib/cib-68.raw
Sep 10 09:39:29 [2378] nfs1a-qs1        cib:     info: cib_file_write_with_digest: Wrote version 0.77.0 of the CIB to disk (digest: ed0d56723649ac978fa191c204e70c55)
Sep 10 09:39:29 [2378] nfs1a-qs1        cib:     info: cib_file_write_with_digest: Reading cluster configuration file /var/lib/pacemaker/cib/cib.1Gv7Xi (digest: /var/lib/pacemaker/cib/cib.wbZZfK)
Sep 10 09:39:34 [2378] nfs1a-qs1        cib:     info: cib_process_ping: Reporting our current digest to nfs1aqs1: 8b73092b4ee9744fc4eaff60f8ba8388 for 0.77.0 (0x5601ffa0e990 0)
_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/