Another thing I noticed is that only one slon process is running for node 3
instead of two.
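A quick way to cross-check that against the lock table (a rough sketch; it
assumes PostgreSQL 9.2 or newer, where pg_stat_activity exposes the backend
PID as "pid" - on older releases the column is "procpid") would be:

-- list node 3's nodelock rows and whether the backend holding each one is still alive
select nl.nl_conncnt, nl.nl_backendpid,
       (sa.pid is not null) as backend_alive
  from "_cluster".sl_nodelock nl
  left join pg_stat_activity sa on sa.pid = nl.nl_backendpid
 where nl.nl_nodeid = 3;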



On Thu, Oct 30, 2014 at 2:55 PM, Granthana Biswas <
granthana.bis...@gmail.com> wrote:

> Also, the nodelock tables on nodes 1 and 2 have too many rows for node 3:
>
> select count(*) from "_cluster".sl_nodelock where nl_nodeid =3;
>  count
> -------
>     67
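>
> For comparison, the per-node counts (I would expect only a handful of rows
> per node, one per slon connection, as far as I understand) can be listed
> with:
>
> select nl_nodeid, count(*)
>   from "_cluster".sl_nodelock
>  group by nl_nodeid
>  order by nl_nodeid;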
>
>
> On Thu, Oct 30, 2014 at 2:29 PM, Granthana Biswas <
> granthana.bis...@gmail.com> wrote:
>
>> Hi All,
>>
>> My replication setup is db1 -> db2 -> db3.
>>
>> After adding a new set to the cluster, the merge is stuck in a waiting
>> state for node 3 to subscribe. Because of this, node 3 is lagging behind.
>>
>> These are the slonik commands I used to add the new set, subscribe, and
>> merge:
>>
>> ---------------------------------------------------------------------------------------------------------
>> create set ( id = 2, origin = 1, comment = 'replication set for surcharge
>> table');
>>
>> set add table (set id = 2, origin = 1, id = 1744, fully qualified name =
>> 'public.t2', comment = 't2 table');
>> set add sequence (set id = 2, origin = 1, id = 1756, fully qualified name
>> = 'public.s2', comment = 's2 sequence');
>>
>> subscribe set ( id = 2, provider = 1, receiver = 2, forward = yes );
>> subscribe set ( id = 2, provider = 2, receiver = 3, forward = yes );
>>
>> merge set ( id = 1, add id = 2, origin = 1);
>>
>> -----------------------------------------------------------------------------------------------------------
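>>
>> As far as I understand, the merge cannot complete until the new set's
>> subscriptions are active and the corresponding events from node 1 are
>> confirmed everywhere. A rough way to see how far each node has got with
>> confirming node 1's events (run on node 1) would be:
>>
>> -- highest event from node 1 confirmed by each receiver
>> select con_received as node, max(con_seqno) as last_confirmed
>>   from "_cluster".sl_confirm
>>  where con_origin = 1
>>  group by con_received
>>  order by con_received;
>>
>> -- latest event generated on node 1, for comparison
>> select max(ev_seqno) from "_cluster".sl_event where ev_origin = 1;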
>>
>> Even though it goes into waiting mode, the sl_subscribe table shows the
>> following:
>>
>>  sub_set | sub_provider | sub_receiver | sub_forward | sub_active
>> ---------+--------------+--------------+-------------+------------
>>        1 |            1 |            2 | t           | t
>>        1 |            2 |            3 | t           | t
>>        2 |            1 |            2 | t           | t
>>        2 |            2 |            3 | t           | t
>>
>>
>> But the slony log on node 3 shows the following repeatedly:
>>
>> NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=29117
>> NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=30115
>> NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=30116
>> NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=30414
>> NOTICE:  Slony-I: Logswitch to sl_log_2 initiated
>> CONTEXT:  SQL statement "SELECT  "_cluster".logswitch_start()"
>> PL/pgSQL function "cleanupevent" line 96 at PERFORM
>> NOTICE:  truncate of <NULL> failed - doing delete
>> NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=31364
>> NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=31369
>> NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=31368
>> NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=32300
>> NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=1117
>> NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=1149
>> NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=1186
>> NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=1247
>> NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=1270
>> NOTICE:  Slony-I: cleanup stale sl_nodelock entry for pid=1294
>>
>>
>> It is continuously trying to clean up stale nodelock entries for
>> nl_nodeid=3 and nl_conncnt=0.
>>
>> I tried stopping and starting the slon process for node 3, which didn't
>> help. I don't see any errors in the other slony log files.
>>
>> Do I have to stop the slon processes on all nodes and start them again?
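>>
>> In the meantime, to see how far node 3 is actually lagging (assuming the
>> sl_status view with these columns is available in this Slony release),
>> something like this on node 1 should show it:
>>
>> -- lag of node 3 relative to the origin, in events and time
>> select st_origin, st_received, st_lag_num_events, st_lag_time
>>   from "_cluster".sl_status
>>  where st_received = 3;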
>>
>> Regards,
>> Granthana
>>
>
>
_______________________________________________
Slony1-general mailing list
Slony1-general@lists.slony.info
http://lists.slony.info/mailman/listinfo/slony1-general
