Hi, I apologize for not having been clear enough.

1) Yes, we have two PostgreSQL servers and a bunch of databases running on
them.

Let's call the two servers SERV1 (172.16.173.35) and SERV2 (172.16.173.36).

On SERV1, I have the following databases running:

===================================
postgres=# \l
                                   List of databases
    Name     |   Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-------------+-----------+----------+-------------+-------------+-----------------------
 base_pilote | back_user | UTF8     | fr_FR.UTF-8 | fr_FR.UTF-8 |
 essym       | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 ferrandi    | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 ferrandi_en | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 gescia      | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 isipca      | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 isipcaen    | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 itescia     | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres         +
             |           |          |             |             | postgres=CTc/postgres
 lafabrique  | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres    | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 supdev      | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 supdevente  | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0   | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
             |           |          |             |             | postgres=CTc/postgres
 template1   | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
             |           |          |             |             | postgres=CTc/postgres
 upmc        | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
===================================
    
On SERV2, I have the exact same databases running (they are replicated from
SERV1 with Slony):

===================================
postgres=# \l
                                   List of databases
    Name     |   Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-------------+-----------+----------+-------------+-------------+-----------------------
 base_pilote | back_user | UTF8     | fr_FR.UTF-8 | fr_FR.UTF-8 |
 essym       | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 ferrandi    | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 ferrandi_en | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 gescia      | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 isipca      | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 isipcaen    | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 itescia     | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres         +
             |           |          |             |             | postgres=CTc/postgres
 lafabrique  | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres    | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 supdev      | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 supdevente  | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0   | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
             |           |          |             |             | postgres=CTc/postgres
 template1   | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
             |           |          |             |             | postgres=CTc/postgres
 upmc        | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
===================================

Master databases are on SERV1
Slave databases are on SERV2

Whichever master database I connect to, the same configuration is in place:

ferrandi=# select * from sl_node ;
 no_id | no_active |             no_comment              | no_failed
-------+-----------+-------------------------------------+-----------
     1 | t         | Node 1 - itescia@172.16.173.35      | f
     2 | t         | Node 2 - itescia@172.16.173.36      | f
     3 | t         | Node 3 - isipca@172.16.173.35       | f
     4 | t         | Node 4 - isipca@172.16.173.36       | f
     5 | t         | Node 5 - lafabrique@172.16.173.35   | f
     6 | t         | Node 6 - lafabrique@172.16.173.36   | f
     7 | t         | Node 7 - supdev@172.16.173.35       | f
     8 | t         | Node 8 - supdev@172.16.173.36       | f
     9 | t         | Node 9 - ferrandi@172.16.173.35     | f
    10 | t         | Node 10 - ferrandi@172.16.173.36    | f
    11 | t         | Node 11 - upmc@172.16.173.35        | f
    12 | t         | Node 12 - upmc@172.16.173.36        | f
    13 | t         | Node 13 - essym@172.16.173.35       | f
    14 | t         | Node 14 - essym@172.16.173.36       | f
    15 | t         | Node 15 - gescia@172.16.173.35      | f
    16 | t         | Node 16 - gescia@172.16.173.36      | f
    17 | t         | Node 17 - isipcaen@172.16.173.35    | f
    18 | t         | Node 18 - isipcaen@172.16.173.36    | f
    19 | t         | Node 19 - ferrandi_en@172.16.173.35 | f
    20 | t         | Node 20 - ferrandi_en@172.16.173.36 | f

So basically:
Node 1 is replicated to Node 2
Node 3 is replicated to Node 4
.....
Node 19 is replicated to Node 20.
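
For reference, each pairing can be read directly from sl_subscribe. A minimal
check (just a sketch, assuming the cluster schema is "_replication_ccir", as
it appears in the queries below):

-- one row per subscription: the set, its provider (master) and receiver (slave)
select sub_set, sub_provider, sub_receiver, sub_forward, sub_active
from "_replication_ccir".sl_subscribe
order by sub_set, sub_receiver;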


For each master/slave pair I have one replication set (a large set of tables I
can't copy/paste here because there are more than 100 tables in the set).

The databases have different names, but they contain the exact same set of
tables, which is replicated to their corresponding slave database.
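
Rather than pasting the whole set, the size of each set can be checked with a
quick count (again a sketch against the "_replication_ccir" schema):

-- number of replicated tables per set on this node
select tab_set, count(*) as nb_tables
from "_replication_ccir".sl_table
group by tab_set
order by tab_set;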

So there's a single configuration file which declares the master/slave nodes
as well as the replication sets (repeated once per master, even though the
databases contain the exact same tables).

This configuration file has been run against each master database (node), so
each master database's Slony schema contains the definition of all 20 nodes.
However, each master database (node) is only replicated to a single slave
database (node).
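
How much topology each node carries can be seen with a quick count (a sketch,
same schema assumption as above):

-- every node currently knows the full 20-node topology
select (select count(*) from "_replication_ccir".sl_node)   as nodes,
       (select count(*) from "_replication_ccir".sl_path)   as paths,
       (select count(*) from "_replication_ccir".sl_listen) as listens;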

So:
1) I don't understand why the configuration of each master/slave pair should
contain the definition of all 20 nodes.
2) For each master/slave pair, it leads to a bunch of 168 processes which do
not seem to be used, and which consume lots of connections:

select distinct
    datname,
    application_name,
    client_addr,
    client_hostname,
    substring(query from 1 for 30) as query_text,
    count(1) over (partition by datname) as cnt_total,
    count(1) over (partition by datname, application_name, client_addr,
                   substring(query from 1 for 30)) as cnt
from pg_stat_activity
order by cnt_total, datname, application_name;

   datname   |      application_name      |  client_addr  | client_hostname |           query_text           | cnt_total | cnt
-------------+----------------------------+---------------+-----------------+--------------------------------+-----------+-----
 ferrandi_en | slon.node_19_listen        | 172.16.173.35 |                 | select con_origin, con_receive |        14 |  14  => Replication has been disabled for this database
 gescia      | slon.node_15_listen        | 172.16.173.35 |                 | select con_origin, con_receive |        14 |  14  => Replication has been disabled for this database
 isipcaen    | slon.node_17_listen        | 172.16.173.35 |                 | select con_origin, con_receive |        14 |  14  => Replication has been disabled for this database

 ferrandi    | psql                       |               |                 | select * from sl_listen order  |       200 |   1
 ferrandi    | slon.local_cleanup         | 172.16.173.35 |                 | begin;lock table "_replication |       200 |   1
 ferrandi    | slon.local_listen          | 172.16.173.35 |                 | rollback transaction;          |       200 |   1
 ferrandi    | slon.local_monitor         | 172.16.173.35 |                 | commit;                        |       200 |   1
 ferrandi    | slon.local_sync            | 172.16.173.35 |                 | rollback transaction;          |       200 |   1
 ferrandi    | slon.node_9_listen         | 172.16.173.35 |                 | select con_origin, con_receive |       200 |  13
 ferrandi    | slon.origin_10_provider_9  | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 ferrandi    | slon.origin_11_provider_9  | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  11
 ferrandi    | slon.origin_12_provider_9  | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 ferrandi    | slon.origin_13_provider_9  | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  11
 ferrandi    | slon.origin_14_provider_9  | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 ferrandi    | slon.origin_1_provider_9   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  11
 ferrandi    | slon.origin_2_provider_9   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 ferrandi    | slon.origin_3_provider_9   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  11
 ferrandi    | slon.origin_4_provider_9   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 ferrandi    | slon.origin_5_provider_9   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  11
 ferrandi    | slon.origin_6_provider_9   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 ferrandi    | slon.origin_7_provider_9   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  11
 ferrandi    | slon.origin_8_provider_9   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 ferrandi    | slon.origin_9_provider_9   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 ferrandi    | slon.origin_9_provider_9   | 172.16.173.35 |                 | select SL.seql_seqid, max(SL.s |       200 |   1
 ferrandi    | slon.remoteWorkerThread_1  | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 ferrandi    | slon.remoteWorkerThread_10 | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 ferrandi    | slon.remoteWorkerThread_11 | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 ferrandi    | slon.remoteWorkerThread_12 | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 ferrandi    | slon.remoteWorkerThread_13 | 172.16.173.35 |                 | insert into "_replication_ccir |       200 |   1
 ferrandi    | slon.remoteWorkerThread_14 | 172.16.173.35 |                 | insert into "_replication_ccir |       200 |   1
 ferrandi    | slon.remoteWorkerThread_15 | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 ferrandi    | slon.remoteWorkerThread_16 | 172.16.173.35 |                 | select "_replication_ccir".log |       200 |   1
 ferrandi    | slon.remoteWorkerThread_17 | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 ferrandi    | slon.remoteWorkerThread_18 | 172.16.173.35 |                 | select "_replication_ccir".log |       200 |   1
 ferrandi    | slon.remoteWorkerThread_19 | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 ferrandi    | slon.remoteWorkerThread_2  | 172.16.173.35 |                 | insert into "_replication_ccir |       200 |   1
 ferrandi    | slon.remoteWorkerThread_20 | 172.16.173.35 |                 | select "_replication_ccir".log |       200 |   1
 ferrandi    | slon.remoteWorkerThread_3  | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 ferrandi    | slon.remoteWorkerThread_4  | 172.16.173.35 |                 | insert into "_replication_ccir |       200 |   1
 ferrandi    | slon.remoteWorkerThread_5  | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 ferrandi    | slon.remoteWorkerThread_6  | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 ferrandi    | slon.remoteWorkerThread_7  | 172.16.173.35 |                 | insert into "_replication_ccir |       200 |   1
 ferrandi    | slon.remoteWorkerThread_8  | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1

 isipca      | slon.local_cleanup         | 172.16.173.35 |                 | begin;lock table "_replication |       200 |   1
 isipca      | slon.local_listen          | 172.16.173.35 |                 | rollback transaction;          |       200 |   1
 isipca      | slon.local_monitor         | 172.16.173.35 |                 | commit;                        |       200 |   1
 isipca      | slon.local_sync            | 172.16.173.35 |                 | commit transaction;            |       200 |   1
 isipca      | slon.node_3_listen         | 172.16.173.35 |                 | select con_origin, con_receive |       200 |  13
 isipca      | slon.origin_10_provider_3  | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 isipca      | slon.origin_11_provider_3  | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  11
 isipca      | slon.origin_12_provider_3  | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 isipca      | slon.origin_13_provider_3  | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  11
 isipca      | slon.origin_14_provider_3  | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 isipca      | slon.origin_1_provider_3   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 isipca      | slon.origin_2_provider_3   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 isipca      | slon.origin_3_provider_3   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 isipca      | slon.origin_3_provider_3   | 172.16.173.35 |                 | select SL.seql_seqid, max(SL.s |       200 |   1
 isipca      | slon.origin_4_provider_3   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 isipca      | slon.origin_5_provider_3   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  11
 isipca      | slon.origin_6_provider_3   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 isipca      | slon.origin_7_provider_3   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  11
 isipca      | slon.origin_8_provider_3   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  12
 isipca      | slon.origin_9_provider_3   | 172.16.173.35 |                 | rollback transaction; set enab |       200 |  11
 isipca      | slon.remoteWorkerThread_1  | 172.16.173.35 |                 | insert into "_replication_ccir |       200 |   1
 isipca      | slon.remoteWorkerThread_10 | 172.16.173.35 |                 | insert into "_replication_ccir |       200 |   1
 isipca      | slon.remoteWorkerThread_11 | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 isipca      | slon.remoteWorkerThread_12 | 172.16.173.35 |                 | insert into "_replication_ccir |       200 |   1
 isipca      | slon.remoteWorkerThread_13 | 172.16.173.35 |                 | insert into "_replication_ccir |       200 |   1
 isipca      | slon.remoteWorkerThread_14 | 172.16.173.35 |                 | insert into "_replication_ccir |       200 |   1
 isipca      | slon.remoteWorkerThread_15 | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 isipca      | slon.remoteWorkerThread_16 | 172.16.173.35 |                 | select "_replication_ccir".log |       200 |   1
 isipca      | slon.remoteWorkerThread_17 | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 isipca      | slon.remoteWorkerThread_18 | 172.16.173.35 |                 | select "_replication_ccir".log |       200 |   1
 isipca      | slon.remoteWorkerThread_19 | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 isipca      | slon.remoteWorkerThread_2  | 172.16.173.35 |                 | insert into "_replication_ccir |       200 |   1
 isipca      | slon.remoteWorkerThread_20 | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 isipca      | slon.remoteWorkerThread_4  | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 isipca      | slon.remoteWorkerThread_5  | 172.16.173.35 |                 | insert into "_replication_ccir |       200 |   1
 isipca      | slon.remoteWorkerThread_6  | 172.16.173.35 |                 | insert into "_replication_ccir |       200 |   1
 isipca      | slon.remoteWorkerThread_7  | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1
 isipca      | slon.remoteWorkerThread_8  | 172.16.173.35 |                 | insert into "_replication_ccir |       200 |   1
 isipca      | slon.remoteWorkerThread_9  | 172.16.173.35 |                 | select "_replication_ccir".for |       200 |   1

…
…
…
=========================

The problem, to me, is all these "slon.origin_XX_provider_Y" processes, which
exist for each database, while a single database is supposed to be replicated
to only one slave database.
I don't see why all these processes referencing other nodes are established on
a database node which is not supposed to work with those other nodes.

And whichever master database I connect to, when I query sl_node or sl_path it
looks like all possible combinations exist.

=========================
ferrandi=# select * from sl_path order by pa_server, pa_client ;
 pa_server | pa_client |                                     pa_conninfo                                      | pa_connretry
-----------+-----------+--------------------------------------------------------------------------------------+--------------
         1 |         2 | host=172.16.173.35 dbname=itescia user=slonyuser port=5432 password=2zG8zKUz774H     |           10
         1 |         3 | host=172.16.173.35 dbname=itescia user=slonyuser port=5432 password=2zG8zKUz774H     |           10
         1 |         4 | host=172.16.173.35 dbname=itescia user=slonyuser port=5432 password=2zG8zKUz774H     |           10
...
...
...
         1 |        17 | host=172.16.173.35 dbname=itescia user=slonyuser port=5432 password=2zG8zKUz774H     |           10
         1 |        18 | host=172.16.173.35 dbname=itescia user=slonyuser port=5432 password=2zG8zKUz774H     |           10
         1 |        19 | host=172.16.173.35 dbname=itescia user=slonyuser port=5432 password=2zG8zKUz774H     |           10
         1 |        20 | host=172.16.173.35 dbname=itescia user=slonyuser port=5432 password=2zG8zKUz774H     |           10
         2 |         1 | host=172.16.173.36 dbname=itescia user=slonyuser port=5432 password=2zG8zKUz774H     |           10
         2 |         3 | host=172.16.173.36 dbname=itescia user=slonyuser port=5432 password=2zG8zKUz774H     |           10
         2 |         4 | host=172.16.173.36 dbname=itescia user=slonyuser port=5432 password=2zG8zKUz774H     |           10
         2 |         5 | host=172.16.173.36 dbname=itescia user=slonyuser port=5432 password=2zG8zKUz774H     |           10
...
...
...
         2 |        18 | host=172.16.173.36 dbname=itescia user=slonyuser port=5432 password=2zG8zKUz774H     |           10
         2 |        19 | host=172.16.173.36 dbname=itescia user=slonyuser port=5432 password=2zG8zKUz774H     |           10
         2 |        20 | host=172.16.173.36 dbname=itescia user=slonyuser port=5432 password=2zG8zKUz774H     |           10
         3 |         1 | host=172.16.173.35 dbname=isipca user=slonyuser port=5432 password=2zG8zKUz774H      |           10
         3 |         2 | host=172.16.173.35 dbname=isipca user=slonyuser port=5432 password=2zG8zKUz774H      |           10
         3 |         4 | host=172.16.173.35 dbname=isipca user=slonyuser port=5432 password=2zG8zKUz774H      |           10
         3 |         5 | host=172.16.173.35 dbname=isipca user=slonyuser port=5432 password=2zG8zKUz774H      |           10
...
...
...
...
...
...
        20 |        18 | host=172.16.173.36 dbname=ferrandi_en user=slonyuser port=5432 password=2zG8zKUz774H |           10
        20 |        19 | host=172.16.173.36 dbname=ferrandi_en user=slonyuser port=5432 password=2zG8zKUz774H |           10
=========================

It basically means that each time a new database is added with its
corresponding slave, I get 200 more connections, while it seems to me I should
only have the connections related to the replication between that master and
its slave.
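
A quick way to track this growth (just a sketch against pg_stat_activity):

-- count slon connections per database
select datname, count(*) as slon_connections
from pg_stat_activity
where application_name like 'slon.%'
group by datname
order by slon_connections desc;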

That's why I wonder if it's normal to have the same global Slony replication
configuration in each master database's Slony schema, when a master database
is only supposed to be replicated to one specific slave database.

Thanks and Best Regards,
Olivier