707680-2016-09-11 16:37:50 PDT INFO   remoteWorkerThread_2: SYNC 5002145332 done in 0.018 seconds

*707771:2016-09-11 16:37:50 PDT WARN   remoteWorkerThread_2: got DROP NODE for local node ID*

707856-2016-09-11 16:38:21 PDT INFO   remoteWorkerThread_5: SYNC 5002157926 done in 0.021 seconds

At this point it continues on its voyage of trying to replicate (from an add
node), but I'm getting a failure, so I'm trying to drop the node so that my
origin can truncate its logs.
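
For reference, the drop in question is the slonik DROP NODE command, roughly
along these lines (the node IDs and conninfo below are placeholders; only the
cluster name cls comes from the log):

  cluster name = cls;

  # placeholder conninfo for the origin (node 1) and the node being dropped (node 5)
  node 1 admin conninfo = 'host=origin-host dbname=origin-db user=slony';
  node 5 admin conninfo = 'host=new-node-host dbname=origin-db user=slony';

  # drop the failing new node (5 is a placeholder ID), submitting the event on the origin
  drop node (id = 5, event node = 1);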

The failure I'm getting is listed below:

NOTICE:  truncate of "torque"."impressions" succeeded

2016-09-11 16:35:22 PDT CONFIG remoteWorkerThread_1: 5500.917 seconds to copy table "torque"."impressions"

2016-09-11 16:35:22 PDT CONFIG remoteWorkerThread_1: copy table "torque"."impressions_archive"

2016-09-11 16:35:23 PDT CONFIG remoteWorkerThread_1: Begin COPY of table "torque"."impressions_archive"

*2016-09-11 16:35:23 PDT ERROR  remoteWorkerThread_1: "select "_cls".copyFields(237);"*

2016-09-11 16:35:23 PDT WARN   remoteWorkerThread_1: data copy for set 2 failed 2 times - sleep 30 seconds


I've seen this behavior with the failing set before, and I think we determined
that the network between the origin and the new node was somehow killing the
connection. So we decided to remove the index creation step on the new node,
so that it was just doing a plain Postgres COPY and would move on to the next
table, not leaving enough idle time for any idle timeouts or the like to kick
in and kill the connection. But I am not seeing that happen in this situation;
set 2 just continues to fail with the *copyFields(237)* error shown above.
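
In case it helps narrow this down: as far as I can tell, the 237 in
copyFields(237) is the Slony table ID rather than an error code, so assuming
the stock Slony catalog, something like this on the origin should show which
table the copy keeps dying on (schema _cls from the log):

  -- which table does Slony table ID 237 refer to?
  select * from "_cls".sl_table where tab_id = 237;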

The copy commands continue even after the DROP command indicated earlier was seen:

2016-09-11 16:56:48 PDT CONFIG remoteWorkerThread_1: 25955276171 bytes copied for table "torque"."impressions_daily"

2016-09-11 17:03:52 PDT CONFIG remoteWorkerThread_1: 1620.812 seconds to copy table "torque"."impressions_daily"

2016-09-11 17:03:52 PDT CONFIG remoteWorkerThread_1: copy table "torque"."impressions"

2016-09-11 17:03:52 PDT CONFIG remoteWorkerThread_1: Begin COPY of table "torque"."impressions"

At this point it has ignored the DROP :) and is attempting to copy the
torque.impressions table. It completed the _daily table previously, but it
started (or kept on with) the data copies even after the DROP was signaled.
I expect a set 2 failure with the same error in about an hour, but I need it
to stop and not ignore the DROP command.
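
For what it's worth, to sanity-check whether the DROP_NODE event has at least
made it into the event queue on the nodes involved, I'm assuming a query along
these lines would show it (stock Slony catalog, schema _cls from the log):

  -- has a DROP_NODE event actually been recorded on this node?
  select ev_origin, ev_seqno, ev_type, ev_timestamp
    from "_cls".sl_event
   where ev_type = 'DROP_NODE'
   order by ev_timestamp desc;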

*Thanks*

*Tory*