albert <[email protected]> writes:
> Greetings all,
>
> I have a setup based on two virtualised server systems with PostgreSQL 8.4
> and Slony-I version 2.0.2 compiled from source on site. Both are running
> Ubuntu Server edition. One system acts as a database front-end with its
> database set as master, while the other is a slave system aimed at
> replicating databases from multiple masters.
>
> In my scenario I want to use the slave system merely as a backup resource:
> if the master server dies, instead of promoting the slave to master, I
> want to recreate the master system from scratch, dump the slave's database
> replica into it, re-promote it to a master system, and re-link it to the
> slave server with minimum effort.
>
> I am aware this is not the most common setup, and I can't find a recipe
> for how to perform this transition. I've tried failover and switchover
> without success.
>
> Any help on the easiest way to accomplish this greatly appreciated.
Sounds like what you want to do is basically to redo replication. I'm
not sure it's particularly "minimizable" :-)
What would seem sensible would be...
- Node 1 ("master") fails
- Node 2 swings into action
1. Reinstall PostgreSQL on node 1. I'd be inclined to do all of the
   following, since only the final step is actually expensive, and
   together they cover all the likely sorts of failures. (A rough
   shell sketch of steps 1-3 follows this list.)
   i. /sbin/mkfs.ext3 (or appropriate) for the filesystems where the
      DB binaries and DB data reside
ii. Copy PostgreSQL binaries from an authoritative source
iii. Run initdb to reinitialize the DB
iv. There may be some customization of pg_hba.conf,
postgresql.conf, users, and roles to do. Ideally, copy the
files from an authoritative place.
2. Drop replication from node 2
You could pretty much just drop out the Slony-I schema, and that
would do the trick.
3. pg_dump node 2, and load the dump into the fresh DB on node 1.
   I'd be inclined to have this go into a file, as you'd want to
   keep that file around in case there are any problems.
4. Run a pre-prepared script to rebuild the cluster.
   The script "tools/slonikconfdump.sh" should create something
   fairly suitable from a running cluster (a sketch of what its
   output looks like follows below).
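
To make steps 1 through 3 concrete, here's a rough shell sketch. Note
that the hostnames, device names, database name, cluster name, and
paths are all placeholders I've made up for illustration; treat it as
an outline to adapt, not a tested script.

    #!/bin/sh
    # Sketch of steps 1-3; all names below are placeholders.
    DB=mydb
    CLUSTER=replcluster        # Slony-I cluster name (schema "_replcluster")
    NODE1=node1.example.com    # the failed master being rebuilt
    NODE2=node2.example.com    # the surviving slave

    ## Step 1 (run on node 1): scrape down and reinstall
    # i.   remake the filesystems (example device names!)
    #/sbin/mkfs.ext3 /dev/sdb1    # DB binaries
    #/sbin/mkfs.ext3 /dev/sdc1    # DB data
    # ii.  copy PostgreSQL binaries from an authoritative source
    #rsync -a backuphost:/opt/pgsql-8.4/ /opt/pgsql-8.4/
    # iii. reinitialize the DB
    initdb -D /var/lib/postgresql/8.4/main
    # iv.  restore config from an authoritative place, then start
    #rsync backuphost:/config/pg_hba.conf /var/lib/postgresql/8.4/main/
    #rsync backuphost:/config/postgresql.conf /var/lib/postgresql/8.4/main/
    pg_ctl -D /var/lib/postgresql/8.4/main start
    # users/roles can be recreated from the surviving node
    pg_dumpall -g -h $NODE2 | psql -h $NODE1 -d postgres

    ## Step 2: drop replication from node 2 (just drop the Slony-I schema)
    psql -h $NODE2 -d $DB -c "drop schema \"_${CLUSTER}\" cascade;"

    ## Step 3: dump node 2 into a file, keep the file, load into node 1
    pg_dump -h $NODE2 $DB > /backup/${DB}.sql
    createdb -h $NODE1 $DB
    psql -h $NODE1 -d $DB -f /backup/${DB}.sql

The dump deliberately goes through a file rather than a pipe, so it's
still around if the load blows up partway through.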
Step 1 scrapes things down fairly much to "bare metal" (well, bare
aluminium oxide :-)). It could be overkill, but all 4 steps are pretty
easy, and they address a multitude of possible reasons why node #1
might have failed, as well as any corruption that might have gotten
left behind by the failure.
If you make those "overkill" bits easy, then you've automated things
that are still useful to have automated. And the process assumes that
you're backing up the sorts of configuration that are mighty useful to
have backed up.
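
As for step 4, the slonik script that "tools/slonikconfdump.sh"
captures from the running cluster, and that your pre-prepared rebuild
script would replay, looks roughly like the following. The cluster
name, conninfo strings, and table names are again placeholders of my
own invention:

    slonik <<_EOF_
    # hypothetical two-node, one-set configuration
    cluster name = replcluster;
    node 1 admin conninfo='host=node1.example.com dbname=mydb user=slony';
    node 2 admin conninfo='host=node2.example.com dbname=mydb user=slony';
    init cluster (id=1, comment='master - node 1');
    store node (id=2, comment='slave - node 2', event node=1);
    store path (server=1, client=2,
                conninfo='host=node1.example.com dbname=mydb user=slony');
    store path (server=2, client=1,
                conninfo='host=node2.example.com dbname=mydb user=slony');
    create set (id=1, origin=1, comment='main replication set');
    set add table (set id=1, origin=1, id=1,
                   fully qualified name='public.accounts');
    set add table (set id=1, origin=1, id=2,
                   fully qualified name='public.history');
    subscribe set (id=1, provider=1, receiver=2, forward=no);
    _EOF_

Once that has run, restart the slon daemons against both nodes (e.g.
"slon replcluster 'host=node1.example.com dbname=mydb user=slony'"),
and replication resumes in the original direction, with the rebuilt
node 1 as master.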
--
let name="cbbrowne" and tld="ca.afilias.info" in String.concat "@" [name;tld];;
Christopher Browne
"Bother," said Pooh, "Eeyore, ready two photon torpedoes and lock
phasers on the Heffalump, Piglet, meet me in transporter room three"