Darcy Buskermolen wrote:
> On Tuesday 13 February 2007 12:51, Mikko Partio wrote:
>   
>> 2) In my opinion slony could have a better monitoring/maintenance
>> interface. Many of the recent posts on the list have been dealing with
>> this issue. For example, with the slonc approach explained above, you
>> could have slonc --ping <clustername>, which would "ping" all nodes in a
>> cluster. Also, it would be great if slonik could guarantee that if a
>> command is executed at one node, it is executed at all nodes -- like a
>> slonik transaction. I have had many cases where, for example, a DDL
>> command executed with slonik's "execute script" has been successfully
>> executed at the master, but has failed at the slaves. The only solution
>> I have come up with so far for this situation is "delete from
>> _cluster.sl_event where ev_type <> 'SYNC'".
>>     
>
> Without employing two-phase commit to do this, it's not a simple thing
> to accomplish, especially if we want to maintain backwards compatibility
> with older versions of PostgreSQL, as they do not offer 2PC.
>   
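
(As an aside on the workaround quoted above: before resorting to that
sort of delete, it is worth seeing which non-SYNC events are actually
stuck, i.e. not yet confirmed by some node.  The sketch below is one
way to check; the "_cluster" schema name just follows the quoted
command, and in a real installation it is "_<clustername>"; the tables
and columns are the usual Slony-I 1.x catalogs.)

  -- Sketch: list non-SYNC events that at least one other node has not
  -- yet confirmed.  "_cluster" stands in for "_<clustername>".
  select e.ev_origin, e.ev_seqno, e.ev_type
    from _cluster.sl_event e
   where e.ev_type <> 'SYNC'
     and exists (
           select 1 from _cluster.sl_node n
            where n.no_id <> e.ev_origin
              and not exists (
                    select 1 from _cluster.sl_confirm c
                     where c.con_origin = e.ev_origin
                       and c.con_received = n.no_id
                       and c.con_seqno >= e.ev_seqno));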

When I was brainstorming about things to discuss at the Slony-I session
at the 10th anniversary conference, Drew Hammond suggested that "maybe
2PC could be useful for something"; while that's not very specific, I
was happy to bring the idea up, since extra eyes can sometimes spot a
use that I didn't perceive.

The reaction was pretty universally negative; it looked as though this
would introduce more failure modes and make for bigger "things that have
big ugly locks around them", which wasn't terribly attractive.

That view seems pretty fair; one of the big merits of Slony-I is that it
is an asynchronous system, which allows us to avoid the need to lock
things.  Jumping back into the "locking snakepit" would throw away that
merit.

I don't want to simply pooh-pooh ideas, but I don't see the "slonc"
direction going very far.

There is one bit that *does* fit nicely into upcoming development: the
notion of having a "ping" operation.  There is merit to adding a "PING"
event that does something of a "chain letter" thing, travelling from
node to node, collecting routing information as it goes, and reporting
that in the logs.  I have discussed that with several people, and ought
to see about implementing it in CVS HEAD.

http://gborg.postgresql.org/project/slony1/bugs/bugupdate.php?1413
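
(No such PING event exists yet; as a rough stand-in, a query along the
lines of the sketch below against sl_confirm shows how far each node's
confirmations from each origin have progressed, which is roughly the
kind of information a PING report would gather.  Again, "_cluster"
stands in for "_<clustername>".)

  -- Sketch: latest event each node has confirmed from each origin,
  -- as a crude "is everybody keeping up" check.
  select con_origin   as origin_node,
         con_received as receiving_node,
         max(con_seqno)     as last_confirmed_event,
         max(con_timestamp) as last_confirmed_at
    from _cluster.sl_confirm
   group by con_origin, con_received
   order by 1, 2;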

But using 2PC sounds like a non-starter to me...