On Fri, May 21, 2010 at 1:24 AM, Jan Wieck <[email protected]> wrote:
> In a multi-node cluster, not every node necessarily needs to be able to
> talk to every other node. Let us just look at a cascaded 3-node cluster:
>
> 1 - 2 - 3
>
> This setup requires 4 sl_path entries to work:
>
> server=1, client=2
> server=2, client=1
> server=2, client=3
> server=3, client=2
>
> And it is supposed to generate the following sl_listen rows:
>
> origin=1, receiver=2, provider=1
> origin=1, receiver=3, provider=2
> origin=2, receiver=1, provider=2
> origin=2, receiver=3, provider=2
> origin=3, receiver=1, provider=2
> origin=3, receiver=2, provider=3
>
> It does not matter which node is currently the origin of any set at all.
> All these paths and connections are important for the health and
> well-being of the Slony cluster. If, for example, the listening for
> events on the origin=2, receiver=3 row were broken, node 3 would still
> replicate data originating from node 1 perfectly fine. But as soon as
> you move a set to node 2, node 3 would start falling behind, and you
> would effectively lose your second-level backup.
>
> This is why Slony originally created a SYNC on EVERY node at least every
> 10 seconds: just so there is some harmless event passing going on, to
> have something to monitor and to keep sl_status looking good.
>
> That is what got removed, and that is what I think we should put back.

This is exactly the kind of Slony black magic I want to understand. Do we
have someplace where we can read up on these internals of Slony, or design
specs; or would you suggest diving into the code?

Regards,
-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.enterprisedb.com

singh.gurj...@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device
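As an aside, the sl_listen rows above can be thought of as "for each (origin, receiver) pair, the provider is the receiver's first hop on the path toward the origin". The following is only an illustrative sketch of that idea (a BFS over the sl_path connection graph), not Slony's actual listen-path generation code; the function name and data shapes are my own invention:

```python
# Sketch: derive (origin, receiver, provider) listen rows from sl_path
# entries. An sl_path row (server=S, client=C) means C can connect to S,
# so we treat it as a directed edge C -> S and, for each receiver, find
# the first hop on its shortest path toward each origin.
# NOT Slony's real algorithm -- just an illustration of the idea.
from collections import deque

def listen_rows(paths):
    """paths: iterable of (server, client) tuples from sl_path."""
    neighbors = {}          # client -> set of servers it can connect to
    for server, client in paths:
        neighbors.setdefault(client, set()).add(server)
    nodes = set(neighbors) | {s for s, _ in paths}

    rows = set()
    for receiver in nodes:
        # BFS outward from the receiver along usable connections,
        # remembering each node's predecessor on the path back.
        parent = {receiver: None}
        queue = deque([receiver])
        while queue:
            node = queue.popleft()
            for nxt in neighbors.get(node, ()):
                if nxt not in parent:
                    parent[nxt] = node
                    queue.append(nxt)
        for origin in nodes:
            if origin == receiver or origin not in parent:
                continue  # unreachable origins get no listen row
            # Walk back from the origin until we reach the receiver's
            # direct neighbor: that neighbor is the provider.
            hop = origin
            while parent[hop] != receiver:
                hop = parent[hop]
            rows.add((origin, receiver, hop))
    return rows

# The 4 sl_path entries of the cascaded 1 - 2 - 3 cluster:
paths = [(1, 2), (2, 1), (2, 3), (3, 2)]
for row in sorted(listen_rows(paths)):
    print("origin=%d, receiver=%d, provider=%d" % row)
```

Run against the four sl_path entries from the example, this reproduces the six sl_listen rows listed above, including origin=1, receiver=3, provider=2 (node 3 hears about node 1's events through node 2).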
_______________________________________________
Slony1-general mailing list
[email protected]
http://lists.slony.info/mailman/listinfo/slony1-general
