Thanks Cedar, Jan, and Andy.
Actually the setup is something like this...
There are two remote servers, remoteA and remoteB.
The table on remoteA needs to be synchronized with the
table on remoteB all the time (well, there could be an interval).
remoteB will *publish* every change and remoteA will *subscribe* to it.
These were my previous solutions:
1. Have a program (using Perl & DBI) on remoteA that connects to
remoteB and does the synchronization.
>>>>>> I can't buy this because remoteB gets too many *hits*.
I just can't afford the cost.
2. Have a trigger on remoteB that writes out to a file either the result
of every SQL statement or the actual SQL itself.
>>>>>> My understanding now is that this will not do it because
of a possible transaction rollback -- thanks again.
As much as possible I want to do the synchronization
*incrementally* (just deal with the difference between remoteA & remoteB).
But I guess I have to do it the hard way.
Here's my third solution. Please comment on this.
KNOWN FACTORS:
^ poor connection
>>> the solution should be intelligent enough to handle such a
situation.
3RD SOLUTION:
^ Have a script on remoteB that uses pg_dump or SQL COPY, and place it in
the crontab (say, every 5 seconds).
^ Have a script on remoteA that copies the dump.file from remoteB.
Place it in the crontab and use *scp* (secure copy) for the copying.
After dump.file is acquired, have another script take care of it
(a rough sketch of both scripts follows below).
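Just to make it concrete, here is a minimal sketch of the two scripts,
assuming a single table called mytable in a database called mydb, a dump
file at /tmp/dump.file, and password-less scp (ssh keys) between the two
boxes -- all of those names are just placeholders, not the real setup.
(Also, stock cron only goes down to one-minute granularity, so "every 5
seconds" would really mean every minute, or a small looping wrapper.)

On remoteB (dump side, run from cron):

  #!/bin/sh
  # mydb, mytable and the paths are placeholders -- adjust to the real setup.
  # -c makes pg_dump emit DROP/CREATE commands, so loading the file on
  # remoteA replaces the old copy of the table.
  pg_dump -c -t mytable mydb > /tmp/dump.file.tmp &&
      mv /tmp/dump.file.tmp /tmp/dump.file
  # the rename happens only if pg_dump succeeded, so remoteA never
  # picks up a half-written file

On remoteA (fetch and load side, run from cron):

  #!/bin/sh
  # needs ssh keys set up so scp runs without asking for a password
  scp remoteB:/tmp/dump.file /tmp/dump.file || exit 1
  # if the copy fails (poor connection), give up and keep the last
  # good copy until the next cron run
  psql -f /tmp/dump.file mydb

The rename trick on remoteB and the exit on failure on remoteA are there
mainly because of the poor connection: if either step fails, remoteA just
keeps its last good copy and tries again on the next run.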
What do you think? Any better idea?
Thank you.
Sherwin