I've monitored the cluster log and the running processes while the script containing

rm -f /var/lib/pgsql/9.3/data/recovery.conf
rm -f /var/lib/pgsql/9.3/data/ra_tmp/PGSQL.lock
crm_attribute -l reboot -N $(uname -n) -n "pgsql-data-status" -v "LATEST"
crm_attribute -l reboot -N $(uname -n) -n "master-pgsql" -v "1000"
pcs resource cleanup pgsql
pcs resource cleanup pgsql_master_slave
pcs resource cleanup master-group

is running. Interestingly, the resource agent places a recovery.conf into the data directory, which seems to put PostgreSQL into recovery mode, as indicated by this process listing:

postgres 61166  1.0  2.1 336488 14664 ?  S  13:35 0:00 /usr/pgsql-9.3/bin/postgres -D /var/lib/pgsql/9.3/data
postgres 61192  0.0  0.1 190072  1296 ?  Ss 13:35 0:00 postgres: logger process
postgres 61193  0.0  0.3 336608  2076 ?  Ss 13:35 0:00 postgres: startup process   recovering 0000000100000000
postgres 61208  0.0  0.2 336488  1684 ?  Ss 13:35 0:00 postgres: checkpointer process
postgres 61209  0.0  0.2 336488  1692 ?  Ss 13:35 0:00 postgres: writer process
postgres 61210  0.0  0.4 343332  2916 ?  Ss 13:35 0:00 postgres: wal receiver process

This in turn seems to cause a timeout in the resource agent while it waits for PostgreSQL to start:

pgsql(pgsql)[61018]: 2014/09/18_13:36:05 WARNING: PostgreSQL template1 isn't running
pgsql(pgsql)[61018]: 2014/09/18_13:36:05 WARNING: Connection error (connection to the server went bad and the session was not interactive) occurred while executing the psql command.
pgsql(pgsql)[61018]: 2014/09/18_13:36:06 WARNING: PostgreSQL template1 isn't running
pgsql(pgsql)[61018]: 2014/09/18_13:36:06 WARNING: Connection error (connection to the server went bad and the session was not interactive) occurred while executing the psql command.
pgsql(pgsql)[61018]: 2014/09/18_13:36:07 WARNING: PostgreSQL template1 isn't running
pgsql(pgsql)[61018]: 2014/09/18_13:36:07 WARNING: Connection error (connection to the server went bad and the session was not interactive) occurred while executing the psql command.
[... the same pair of WARNING messages repeats roughly once per second from 13:36:09 through 13:36:40 ...]
pgsql(pgsql)[61018]: 2014/09/18_13:36:41 WARNING: PostgreSQL template1 isn't running
pgsql(pgsql)[61018]: 2014/09/18_13:36:41 WARNING: Connection error (connection to the server went bad and the session was not interactive) occurred while executing the psql command.
Sep 18 13:36:41 [51045] node1 lrmd: warning: child_timeout_callback: pgsql_start_0 process (PID 61018) timed out
Sep 18 13:36:41 [51045] node1 lrmd: warning: operation_finished: pgsql_start_0:61018 - timed out after 60000ms
Sep 18 13:36:41 [51045] node1 lrmd: notice: operation_finished: pgsql_start_0:61018:stderr [ psql: FATAL: the database system is starting up ]
[... the same stderr notice is repeated several dozen times ...]
Sep 18 13:36:41 [51045] node1 lrmd: info: log_finished: finished - rsc:pgsql action:start call_id:160 pid:61018 exit-code:1 exec-time:60037ms queue-time:0ms
Sep 18 13:36:41 [51045] node1 lrmd: info: process_lrmd_get_rsc_info: Resource 'pgsql' not found (2 active resources)
Sep 18 13:36:41 [51048] node1 crmd: error: process_lrm_event: LRM operation pgsql_start_0 (160) Timed Out (timeout=60000ms)
Sep 18 13:36:41 [51048] node1 crmd: warning: do_update_resource: Resource pgsql no longer exists in the lrmd
Sep 18 13:36:41 [51048] node1 crmd: warning: status_from_rc: Action 6 (pgsql_start_0) on node1 failed (target: 0 vs. rc: 1): Error
Sep 18 13:36:41 [51048] node1 crmd: warning: update_failcount: Updating failcount for pgsql on node1 after failed start: rc=1 (update=INFINITY, time=1411061801)
Sep 18 13:36:41 [51048] node1 crmd: info: abort_transition_graph: match_graph_event:313 - Triggered transition abort (complete=0, node=node1, tag=lrm_rsc_op, id=pgsql_last_failure_0, magic=2:1;6:40:0:2b32876f-fe95-470b-b770-9c34a79944e9) : Event failed
Sep 18 13:36:41 [51048] node1 crmd: info: match_graph_event: Action pgsql_start_0 (6) confirmed on node1 (rc=4)
Sep 18 13:36:41 [51048] node1 crmd: warning: update_failcount: Updating failcount for pgsql on node1 after failed start: rc=1 (update=INFINITY, time=1411061801)
Sep 18 13:36:41 [51048] node1 crmd: info: process_graph_event: Detected action (40.6) pgsql_start_0.160=unknown error: failed
Sep 18 13:36:41 [51048] node1 crmd: notice: process_lrm_event: node1-pgsql_start_0:160 [ psql: FATAL: the database system is starting up\npsql: FATAL: the database system is starting up\n ... \npsql: FATAL:
Sep 18 13:36:41 [51048] node1 crmd: info: process_lrm_event: Deletion of resource 'pgsql' complete after pgsql_start_0
Sep 18 13:36:41 [51048] node1 crmd: info: notify_deleted: Notifying f943b2e1-e9bd-435f-a8c6-31d28b7ba2af on node1 that pgsql was deleted
Sep 18 13:36:41 [51046] node1 attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-pgsql (INFINITY)
Sep 18 13:36:41 [51043] node1 cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='node1']//lrm_resource[@id='pgsql']: OK (rc=0, origin=local/crmd/208, version=50.81.9)
Sep 18 13:36:41 [51043] node1 cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='1']//transient_attributes//nvpair[@name='fail-count-pgsql']: No such device or address (rc=-6, origin=local/attrd/105, version=50.81.9)
Sep 18 13:36:41 [51046] node1 attrd: notice: attrd_perform_update: Sent update 107: fail-count-pgsql=INFINITY
Sep 18 13:36:41 [51046] node1 attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-pgsql (1411061801)
Sep 18 13:36:41 [51043] node1 cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='1']//transient_attributes//nvpair[@name='last-failure-pgsql']: OK (rc=0, origin=local/attrd/108, version=50.81.10)
Sep 18 13:36:41 [51046] node1 attrd: notice: attrd_perform_update: Sent update 109: last-failure-pgsql=1411061801
Sep 18 13:36:41 [51046] node1 attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-pgsql (INFINITY)
Sep 18 13:36:41 [51043] node1 cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='1']//transient_attributes//nvpair[@name='fail-count-pgsql']: OK (rc=0, origin=local/attrd/110, version=50.81.11)
Sep 18 13:36:41 [51046] node1 attrd: notice: attrd_perform_update: Sent update 111: fail-count-pgsql=INFINITY
Sep 18 13:36:41 [51046] node1 attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-pgsql (1411061801)
Sep 18 13:36:41 [51043] node1 cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='1']//transient_attributes//nvpair[@name='last-failure-pgsql']: OK (rc=0, origin=local/attrd/112, version=50.81.11)
Sep 18 13:36:41 [51046] node1 attrd: notice: attrd_perform_update: Sent update 113: last-failure-pgsql=1411061801
Sep 18 13:36:41 [51048] node1 crmd: info: abort_transition_graph: te_update_diff:258 - Triggered transition abort (complete=0, node=node1, tag=lrm_rsc_op, id=pgsql_last_0, magic=0:7;4:39:7:2b32876f-fe95-470b-b770-9c34a79944e9, cib=50.81.9) : Resource op removal
Sep 18 13:36:41 [51048] node1 crmd: info: abort_transition_graph: te_update_diff:172 - Triggered transition abort (complete=0, node=node1, tag=nvpair, id=status-1-fail-count-pgsql, name=fail-count-pgsql, value=INFINITY, magic=NA, cib=50.81.10) : Transient attribute: update
Sep 18 13:36:41 [51048] node1 crmd: info: abort_transition_graph: te_update_diff:172 - Triggered transition abort (complete=0, node=node1, tag=nvpair, id=status-1-last-failure-pgsql, name=last-failure-pgsql, value=1411061801, magic=NA, cib=50.81.11) : Transient attribute: update
Sep 18 13:36:41 [51048] node1 crmd: notice: te_rsc_command: Initiating action 46: notify pgsql_post_notify_start_0 on node1 (local)
Sep 18 13:36:41 [51045] node1 lrmd: info: process_lrmd_get_rsc_info: Resource 'pgsql' not found (2 active resources)
Sep 18 13:36:41 [51045] node1 lrmd: info: process_lrmd_get_rsc_info: Resource 'pgsql:0' not found (2 active resources)
Sep 18 13:36:41 [51045] node1 lrmd: info: process_lrmd_rsc_register: Added 'pgsql' to the rsc list (3 active resources)
Sep 18 13:36:41 [51048] node1 crmd: info: do_lrm_rsc_op: Performing key=46:40:0:2b32876f-fe95-470b-b770-9c34a79944e9 op=pgsql_notify_0
Sep 18 13:36:41 [51045] node1 lrmd: info: log_execute: executing - rsc:pgsql action:notify call_id:167
Sep 18 13:36:41 [51043] node1 cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='node1']//lrm_resource[@id='pgsql']: OK (rc=0, origin=local/crmd/211, version=50.82.1)
Sep 18 13:36:42 [51045] node1 lrmd: info: process_lrmd_get_rsc_info: Resource 'pgsql:0' not found (3 active resources)
Sep 18 13:36:42 [51048] node1 crmd: info: notify_deleted: Notifying f4bfc05a-c58c-481b-ab96-0155c3f4f372 on node1 that pgsql:0 was deleted
Sep 18 13:36:42 [51043] node1 cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='node1']//lrm_resource[@id='pgsql:0']: OK (rc=0, origin=local/crmd/213, version=50.82.1)
Sep 18 13:36:42 [51045] node1 lrmd: info: process_lrmd_get_rsc_info: Resource 'pgsql:1' not found (3 active resources)
Sep 18 13:36:42 [51048] node1 crmd: info: notify_deleted: Notifying f4bfc05a-c58c-481b-ab96-0155c3f4f372 on node1 that pgsql:1 was deleted
Sep 18 13:36:42 [51043] node1 cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='node1']//lrm_resource[@id='pgsql:1']: OK (rc=0, origin=local/crmd/216, version=50.83.1)
Sep 18 13:36:42 [51048] node1 crmd: info: delete_resource: Removing resource pgsql_vip_rep for ae8b028f-ff93-4228-afa2-08647279ef05 (internal) on node1
Sep 18 13:36:42 [51048] node1 crmd: info: notify_deleted: Notifying ae8b028f-ff93-4228-afa2-08647279ef05 on node1 that pgsql_vip_rep was deleted
Sep 18 13:36:42 [51043] node1 cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='node1']//lrm_resource[@id='pgsql_vip_rep']: OK (rc=0, origin=local/crmd/220, version=50.83.2)
Sep 18 13:36:42 [51048] node1 crmd: info: abort_transition_graph: te_update_diff:258 - Triggered transition abort (complete=0, node=node1, tag=lrm_rsc_op, id=pgsql_vip_rep_last_0, magic=0:7;4:40:7:2b32876f-fe95-470b-b770-9c34a79944e9, cib=50.83.2) : Resource op removal
Sep 18 13:36:42 [51048] node1 crmd: info: delete_resource: Removing resource pgsql_forward_listen_port for ae8b028f-ff93-4228-afa2-08647279ef05 (internal) on node1
Sep 18 13:36:42 [51048] node1 crmd: info: notify_deleted: Notifying ae8b028f-ff93-4228-afa2-08647279ef05 on node1 that pgsql_forward_listen_port was deleted
Sep 18 13:36:42 [51043] node1 cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='node1']//lrm_resource[@id='pgsql_forward_listen_port']: OK (rc=0, origin=local/crmd/223, version=50.83.3)
Sep 18 13:36:42 [51048] node1 crmd: info: abort_transition_graph: te_update_diff:258 - Triggered transition abort (complete=0, node=node1, tag=lrm_rsc_op, id=pgsql_forward_listen_port_last_0, magic=0:7;5:40:7:2b32876f-fe95-470b-b770-9c34a79944e9, cib=50.83.3) : Resource op removal
Sep 18 13:36:42 [51045] node1 lrmd: info: log_finished: finished - rsc:pgsql action:notify call_id:167 pid:62615 exit-code:0 exec-time:783ms queue-time:0ms
Sep 18 13:36:42 [51048] node1 crmd: info: match_graph_event: Action pgsql_notify_0 (46) confirmed on node1 (rc=0)
Sep 18 13:36:42 [51048] node1 crmd: notice: process_lrm_event: LRM operation pgsql_notify_0 (call=167, rc=0, cib-update=0, confirmed=true) ok
Sep 18 13:36:42 [51047] node1 pengine: info: clone_print: Master/Slave Set: pgsql_master_slave [pgsql]
Sep 18 13:36:42 [51047] node1 pengine: info: native_print: pgsql_vip_rep (ocf::heartbeat:IPaddr2): Stopped
Sep 18 13:36:42 [51047] node1 pengine: info: native_print: pgsql_forward_listen_port (ocf::heartbeat:portforward): Stopped
Sep 18 13:36:42 [51047] node1 pengine: info: get_failcount_full: pgsql_master_slave has failed INFINITY times on node1
Sep 18 13:36:42 [51047] node1 pengine: warning: common_apply_stickiness: Forcing pgsql_master_slave away from node1 after 1000000 failures (max=1)
Sep 18 13:36:42 [51047] node1 pengine: info: get_failcount_full: pgsql_master_slave has failed INFINITY times on node1
Sep 18 13:36:42 [51047] node1 pengine: warning: common_apply_stickiness: Forcing pgsql_master_slave away from node1 after 1000000 failures (max=1)
Sep 18 13:36:42 [51047] node1 pengine: info: rsc_merge_weights: pgsql_master_slave: Rolling back scores from pgsql_vip_rep
Sep 18 13:36:42 [51047] node1 pengine: info: native_color: Resource pgsql:0 cannot run anywhere
Sep 18 13:36:42 [51047] node1 pengine: info: native_color: Resource pgsql:1 cannot run anywhere
Sep 18 13:36:42 [51047] node1 pengine: info: rsc_merge_weights: pgsql_master_slave: Rolling back scores from pgsql_vip_rep
Sep 18 13:36:42 [51047] node1 pengine: info: master_color: pgsql_master_slave: Promoted 0 instances of a possible 1 to master
Sep 18 13:36:42 [51047] node1 pengine: info: rsc_merge_weights: pgsql_vip_rep: Rolling back scores from pgsql_forward_listen_port
Sep 18 13:36:42 [51047] node1 pengine: info: native_color: Resource pgsql_vip_rep cannot run anywhere
Sep 18 13:36:42 [51047] node1 pengine: info: native_color: Resource pgsql_forward_listen_port cannot run anywhere
Sep 18 13:36:42 [51047] node1 pengine: info: LogActions: Leave pgsql:0 (Stopped)
Sep 18 13:36:42 [51047] node1 pengine: info: LogActions: Leave pgsql:1 (Stopped)
Sep 18 13:36:42 [51047] node1 pengine: info: LogActions: Leave pgsql_vip_rep (Stopped)
Sep 18 13:36:42 [51047] node1 pengine: info: LogActions: Leave pgsql_forward_listen_port (Stopped)
Sep 18 13:36:42 [51048] node1 crmd: notice: te_rsc_command: Initiating action 4: monitor pgsql:0_monitor_0 on node1 (local)
Sep 18 13:36:42 [51048] node1 crmd: info: do_lrm_rsc_op: Performing key=4:41:7:2b32876f-fe95-470b-b770-9c34a79944e9 op=pgsql_monitor_0
Sep 18 13:36:42 [51048] node1 crmd: notice: te_rsc_command: Initiating action 5: monitor pgsql_vip_rep_monitor_0 on node1 (local)
Sep 18 13:36:42 [51045] node1 lrmd: info: process_lrmd_get_rsc_info: Resource 'pgsql_vip_rep' not found (1 active resources)
Sep 18 13:36:42 [51045] node1 lrmd: info: process_lrmd_rsc_register: Added 'pgsql_vip_rep' to the rsc list (2 active resources)
Sep 18 13:36:42 [51048] node1 crmd: info: do_lrm_rsc_op: Performing key=5:41:7:2b32876f-fe95-470b-b770-9c34a79944e9 op=pgsql_vip_rep_monitor_0
Sep 18 13:36:42 [51048] node1 crmd: notice: te_rsc_command: Initiating action 6: monitor pgsql_forward_listen_port_monitor_0 on node1 (local)
Sep 18 13:36:42 [51045] node1 lrmd: info: process_lrmd_get_rsc_info: Resource 'pgsql_forward_listen_port' not found (2 active resources)
Sep 18 13:36:42 [51045] node1 lrmd: info: process_lrmd_rsc_register: Added 'pgsql_forward_listen_port' to the rsc list (3 active resources)
Sep 18 13:36:42 [51048] node1 crmd: info: do_lrm_rsc_op: Performing key=6:41:7:2b32876f-fe95-470b-b770-9c34a79944e9 op=pgsql_forward_listen_port_monitor_0
Sep 18 13:36:42 [51048] node1 crmd: notice: process_lrm_event: LRM operation pgsql_forward_listen_port_monitor_0 (call=180, rc=7, cib-update=227, confirmed=true) not running
Sep 18 13:36:42 [51048] node1 crmd: notice: process_lrm_event: node1-pgsql_forward_listen_port_monitor_0:180 [ portforward REDIRECT rule for OUTPUT chain [tcp 5433 5432] is inactive\n ]
Sep 18 13:36:42 [51048] node1 crmd: info: match_graph_event: Action pgsql_forward_listen_port_monitor_0 (6) confirmed on node1 (rc=0)
pgsql(pgsql)[62677]: 2014/09/18_13:36:42 INFO: Don't check /var/lib/pgsql/9.3/data/ during probe
Sep 18 13:36:42 [51048] node1 crmd: notice: process_lrm_event: LRM operation pgsql_vip_rep_monitor_0 (call=176, rc=7, cib-update=228, confirmed=true) not running
Sep 18 13:36:42 [51048] node1 crmd: info: match_graph_event: Action pgsql_vip_rep_monitor_0 (5) confirmed on node1 (rc=0)
pgsql(pgsql)[62677]: 2014/09/18_13:36:43 ERROR: PostgreSQL template1 isn't running
pgsql(pgsql)[62677]: 2014/09/18_13:36:43 ERROR: Connection error (connection to the server went bad and the session was not interactive) occurred while executing the psql command.
Sep 18 13:36:43 [51045] node1 lrmd: notice: operation_finished: pgsql_monitor_0:62677:stderr [ psql: FATAL: the database system is starting up ]
Sep 18 13:36:43 [51048] node1 crmd: notice: process_lrm_event: LRM operation pgsql_monitor_0 (call=172, rc=1, cib-update=229, confirmed=true) unknown error
Sep 18 13:36:43 [51048] node1 crmd: notice: process_lrm_event: node1-pgsql_monitor_0:172 [ psql: FATAL: the database system is starting up\n ]
Sep 18 13:36:43 [51048] node1 crmd: warning: status_from_rc: Action 4 (pgsql:0_monitor_0) on node1 failed (target: 7 vs. rc: 1): Error
Sep 18 13:36:43 [51048] node1 crmd: info: abort_transition_graph: match_graph_event:313 - Triggered transition abort (complete=0, node=node1, tag=lrm_rsc_op, id=pgsql_last_failure_0, magic=0:1;4:41:7:2b32876f-fe95-470b-b770-9c34a79944e9, cib=50.83.6) : Event failed
Sep 18 13:36:43 [51048] node1 crmd: info: match_graph_event: Action pgsql_monitor_0 (4) confirmed on node1 (rc=4)
Sep 18 13:36:43 [51048] node1 crmd: info: process_graph_event: Detected action (41.4) pgsql_monitor_0.172=unknown error: failed
Sep 18 13:36:43 [51047] node1 pengine: warning: unpack_rsc_op: Processing failed op monitor for pgsql:0 on node1: unknown error (1)
Sep 18 13:36:43 [51047] node1 pengine: info: clone_print: Master/Slave Set: pgsql_master_slave [pgsql]
Sep 18 13:36:43 [51047] node1 pengine: info: native_print: pgsql (ocf::heartbeat:pgsql): FAILED node1
Sep 18 13:36:43 [51047] node1 pengine: info: native_print: pgsql_vip_rep (ocf::heartbeat:IPaddr2): Stopped
Sep 18 13:36:43 [51047] node1 pengine: info: native_print: pgsql_forward_listen_port (ocf::heartbeat:portforward): Stopped
Sep 18 13:36:43 [51047] node1 pengine: info: get_failcount_full: pgsql:0 has failed INFINITY times on node1
Sep 18 13:36:43 [51047] node1 pengine: warning: common_apply_stickiness: Forcing pgsql_master_slave away from node1 after 1000000 failures (max=1)
Sep 18 13:36:43 [51047] node1 pengine: info: get_failcount_full: pgsql_master_slave has failed INFINITY times on node1
Sep 18 13:36:43 [51047] node1 pengine: warning: common_apply_stickiness: Forcing pgsql_master_slave away from node1 after 1000000 failures (max=1)
Sep 18 13:36:43 [51047] node1 pengine: info: rsc_merge_weights: pgsql_master_slave: Rolling back scores from pgsql_vip_rep
Sep 18 13:36:43 [51047] node1 pengine: info: native_color: Resource pgsql:1 cannot run anywhere
Sep 18 13:36:43 [51047] node1 pengine: info: native_color: Resource pgsql:0 cannot run anywhere
Sep 18 13:36:43 [51047] node1 pengine: info: rsc_merge_weights: pgsql_master_slave: Rolling back scores from pgsql_vip_rep
Sep 18 13:36:43 [51047] node1 pengine: info: master_color: pgsql_master_slave: Promoted 0 instances of a possible 1 to master
Sep 18 13:36:43 [51047] node1 pengine: info: rsc_merge_weights: pgsql_vip_rep: Rolling back scores from pgsql_forward_listen_port
Sep 18 13:36:43 [51047] node1 pengine: info: native_color: Resource pgsql_vip_rep cannot run anywhere
Sep 18 13:36:43 [51047] node1 pengine: info: native_color: Resource pgsql_forward_listen_port cannot run anywhere
Sep 18 13:36:43 [51047] node1 pengine: notice: LogActions: Stop pgsql:0 (node1)
Sep 18 13:36:43 [51047] node1 pengine: info: LogActions: Leave pgsql:1 (Stopped)
Sep 18 13:36:43 [51047] node1 pengine: info: LogActions: Leave pgsql_vip_rep (Stopped)
Sep 18 13:36:43 [51047] node1 pengine: info: LogActions: Leave pgsql_forward_listen_port (Stopped)
Sep 18 13:36:43 [51048] node1 crmd: notice: te_rsc_command: Initiating action 42: notify pgsql_pre_notify_stop_0 on node1 (local)
Sep 18 13:36:43 [51048] node1 crmd: info: do_lrm_rsc_op: Performing key=42:42:0:2b32876f-fe95-470b-b770-9c34a79944e9 op=pgsql_notify_0
Sep 18 13:36:43 [51045] node1 lrmd: info: log_execute: executing - rsc:pgsql action:notify call_id:181
Sep 18 13:36:43 [51045] node1 lrmd: info: log_finished: finished - rsc:pgsql action:notify call_id:181 pid:62802 exit-code:0 exec-time:212ms queue-time:0ms
Sep 18 13:36:43 [51048] node1 crmd: info: match_graph_event: Action pgsql_notify_0 (42) confirmed on node1 (rc=0)
Sep 18 13:36:43 [51048] node1 crmd: notice: process_lrm_event: LRM operation pgsql_notify_0 (call=181, rc=0, cib-update=0, confirmed=true) ok
Sep 18 13:36:43 [51048] node1 crmd: notice: te_rsc_command: Initiating action 1: stop pgsql_stop_0 on node1 (local)
Sep 18 13:36:43 [51048] node1 crmd: info: do_lrm_rsc_op: Performing key=1:42:0:2b32876f-fe95-470b-b770-9c34a79944e9 op=pgsql_stop_0
Sep 18 13:36:43 [51045] node1 lrmd: info: log_execute: executing - rsc:pgsql action:stop call_id:182
Sep 18 13:36:43 [51046] node1 attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: master-pgsql (-INFINITY)
Sep 18 13:36:43 [51043] node1 cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='1']//transient_attributes//nvpair[@name='master-pgsql']: OK (rc=0, origin=local/attrd/114, version=50.83.6)
Sep 18 13:36:43 [51046] node1 attrd: notice: attrd_perform_update: Sent update 115: master-pgsql=-INFINITY
Sep 18 13:36:43 [51048] node1 crmd: info: abort_transition_graph: te_update_diff:172 - Triggered transition abort (complete=0, node=node1, tag=nvpair, id=status-1-master-pgsql, name=master-pgsql, value=-INFINITY, magic=NA, cib=50.83.7) : Transient attribute: update
pgsql(pgsql)[62853]: 2014/09/18_13:36:45 INFO: waiting for server to shut down.....
done server stopped pgsql(pgsql)[62853]: 2014/09/18_13:36:45 INFO: PostgreSQL is down Sep 18 13:36:45 [51043] node1 cib: info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='1']//transient_attributes//nvpair[@name='pgsql-status']: OK (rc=0, origin=local/crm_attribute/3, version=50.83.7) Sep 18 13:36:45 [51045] node1 lrmd: info: log_finished: finished - rsc:pgsql action:stop call_id:182 pid:62853 exit-code:0 exec-time:2573ms queue-time:0ms Sep 18 13:36:45 [51048] node1 crmd: notice: process_lrm_event: LRM operation pgsql_stop_0 (call=182, rc=0, cib-update=231, confirmed=true) ok Sep 18 13:36:45 [51048] node1 crmd: info: match_graph_event: Action pgsql_stop_0 (1) confirmed on node1 (rc=0) Sep 18 13:36:45 [51047] node1 pengine: warning: unpack_rsc_op: Processing failed op monitor for pgsql:0 on node1: unknown error (1) Sep 18 13:36:45 [51047] node1 pengine: info: clone_print: Master/Slave Set: pgsql_master_slave [pgsql] Sep 18 13:36:45 [51047] node1 pengine: info: native_print: pgsql_vip_rep (ocf::heartbeat:IPaddr2): Stopped Sep 18 13:36:45 [51047] node1 pengine: info: native_print: pgsql_forward_listen_port (ocf::heartbeat:portforward): Stopped Sep 18 13:36:45 [51047] node1 pengine: info: get_failcount_full: pgsql:0 has failed INFINITY times on node1 Sep 18 13:36:45 [51047] node1 pengine: warning: common_apply_stickiness: Forcing pgsql_master_slave away from node1 after 1000000 failures (max=1) Sep 18 13:36:45 [51047] node1 pengine: info: get_failcount_full: pgsql_master_slave has failed INFINITY times on node1 Sep 18 13:36:45 [51047] node1 pengine: warning: common_apply_stickiness: Forcing pgsql_master_slave away from node1 after 1000000 failures (max=1) Sep 18 13:36:45 [51047] node1 pengine: info: rsc_merge_weights: pgsql_master_slave: Rolling back scores from pgsql_vip_rep Sep 18 13:36:45 [51047] node1 pengine: info: native_color: Resource pgsql:0 cannot run anywhere Sep 18 13:36:45 [51047] node1 pengine: info: 
native_color: Resource pgsql:1 cannot run anywhere Sep 18 13:36:45 [51047] node1 pengine: info: rsc_merge_weights: pgsql_master_slave: Rolling back scores from pgsql_vip_rep Sep 18 13:36:45 [51047] node1 pengine: info: master_color: pgsql_master_slave: Promoted 0 instances of a possible 1 to master Sep 18 13:36:45 [51047] node1 pengine: info: rsc_merge_weights: pgsql_vip_rep: Rolling back scores from pgsql_forward_listen_port Sep 18 13:36:45 [51047] node1 pengine: info: native_color: Resource pgsql_vip_rep cannot run anywhere Sep 18 13:36:45 [51047] node1 pengine: info: native_color: Resource pgsql_forward_listen_port cannot run anywhere Sep 18 13:36:45 [51047] node1 pengine: info: LogActions: Leave pgsql:0 (Stopped) Sep 18 13:36:45 [51047] node1 pengine: info: LogActions: Leave pgsql:1 (Stopped) Sep 18 13:36:45 [51047] node1 pengine: info: LogActions: Leave pgsql_vip_rep (Stopped) Sep 18 13:36:45 [51047] node1 pengine: info: LogActions: Leave pgsql_forward_listen_port (Stopped) -- View this message in context: http://linux-ha.996297.n3.nabble.com/Unable-to-start-any-node-of-pgsql-Master-Slave-Cluster-tp15816p15818.html Sent from the Linux-HA mailing list archive at Nabble.com. _______________________________________________ Linux-HA mailing list Linux-HA@lists.linux-ha.org http://lists.linux-ha.org/mailman/listinfo/linux-ha See also: http://linux-ha.org/ReportingProblems
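P.S. The repeated pengine warning "Forcing pgsql_master_slave away from node1 after 1000000 failures (max=1)" means the failed probe has driven the resource's failcount to INFINITY, so Pacemaker will not place the resource on node1 again until that history is cleared. A minimal sketch of inspecting and clearing it (resource and node names taken from the log above; cleanup alone won't fix the underlying "the database system is starting up" probe failure, it only lets the cluster retry):

```shell
# Query the failcount that is forcing the resource away from node1
crm_failcount -r pgsql -N node1 -G

# Clear the failure history so the policy engine will schedule
# the resource on node1 again
pcs resource cleanup pgsql
```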