I have Slony (1.2.x) running on our production cluster, where it has been
running for a few years now. The flow of data looks like this:

master -|----> slave1
        |----> slave2
        |----> slave3
         ...
        |----> slave9

Data always flows from master to each identically configured slave. There is
never any data replication between slaves or from slaves to master. All
one-way, all down-stream, all the time.

As the load on our service has grown, I've begun to see node lag counts
grow when machines get busy. Based on my understanding of the
documentation at the time the cluster was originally configured, I added
store paths from every node to every other node (master --> slave1-9,
slave1 --> master + slave2-9, etc.).
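
For reference, the existing full-mesh setup was created with slonik
commands roughly like the following (cluster name, host names, and
conninfo strings here are placeholders, not our actual values):

```
# Sketch of the current full-mesh path configuration.
# 'mycluster' and the conninfo strings are illustrative only.
cluster name = mycluster;
node 1 admin conninfo = 'host=master dbname=mydb user=slony';

store path (server = 1, client = 2,
            conninfo = 'host=master dbname=mydb user=slony');
store path (server = 2, client = 1,
            conninfo = 'host=slave1 dbname=mydb user=slony');
store path (server = 2, client = 3,
            conninfo = 'host=slave1 dbname=mydb user=slony');
# ...and so on for every (server, client) pair in the cluster
```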

Question #1: If my flow of data is always/only from master to each slave,
can I remove the store paths between slaves, leaving me only with master
--> slaveN and slaveN --> master? Would this decrease the communication
overhead or otherwise be beneficial?
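
In other words, for each slave/slave pair I would run something like the
following (assuming I'm reading the DROP PATH docs correctly; cluster
name and conninfo are placeholders):

```
# Remove the slave<->slave paths, keeping only master<->slaveN.
# 'mycluster' and the conninfo string are illustrative only.
cluster name = mycluster;
node 1 admin conninfo = 'host=master dbname=mydb user=slony';

drop path (server = 2, client = 3);
drop path (server = 3, client = 2);
# ...repeat for every remaining slave/slave pair
```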

Question #2: When initially configured, the DSN connection strings used to
define each node used IP addresses that were part of a 1Gbit network. Since
then, each of these machines has had an additional 10Gbit network connection
added to it. Would it be safe to stop the slon daemons, manually update
the sl_path.pa_conninfo column values to use the IP addresses of the new
network interfaces, then restart the slon daemons?
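
Concretely, I'm picturing something like this on each node while the
daemons are down (the `_mycluster` schema name and the IP prefixes are
made up for illustration):

```sql
-- Run on each node while the slon daemons are stopped.
-- '_mycluster' and the address prefixes are placeholders.
UPDATE _mycluster.sl_path
   SET pa_conninfo = replace(pa_conninfo,
                             'host=192.168.1.',
                             'host=10.10.1.')
 WHERE pa_conninfo LIKE '%host=192.168.1.%';
```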

Question #3: Are there other configuration parameters I can use to improve
the overall performance of the cluster?

Thanks,
jason
-- 
Jason L. Buberel
[email protected]
_______________________________________________
Slony1-general mailing list
[email protected]
http://lists.slony.info/mailman/listinfo/slony1-general
