Alvaro Herrera wrote:

> On Fri, Nov 21, 2003 at 09:38:50AM +0800, Christopher Kings-Lynne wrote:
> > Yeah, I think the main issue in all this is that for real production
> > sites, upgrading Postgres across major releases is *painful*.  We have
> > to find a solution to that before it makes sense to speed up the
> > major-release cycle.
>
> Well, I think one of the simplest is to do a topological sort of objects
> in pg_dump (between object classes that need it), AND regression testing
> for pg_dump :)
>
> One of the most complex would be to avoid the need of pg_dump for
> upgrades ...
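[Editor's note: the dependency-ordered dump Alvaro suggests amounts to a topological sort over object dependencies. A minimal sketch, assuming hypothetical object names and a `depends_on` map that is not in the original, using Kahn's algorithm:]

```python
from collections import defaultdict, deque

def topo_sort(objects, depends_on):
    """Order dump objects so every object is emitted after the
    objects it depends on.  `objects` is a list of names and
    `depends_on` maps an object to the objects it requires
    (both hypothetical stand-ins for pg_dump's internal catalog)."""
    indegree = {o: 0 for o in objects}
    dependents = defaultdict(list)
    for obj, deps in depends_on.items():
        for dep in deps:
            dependents[dep].append(obj)
            indegree[obj] += 1
    queue = deque(o for o in objects if indegree[o] == 0)
    ordered = []
    while queue:
        obj = queue.popleft()
        ordered.append(obj)
        for nxt in dependents[obj]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(ordered) != len(objects):
        raise ValueError("dependency cycle detected")
    return ordered
```

A cycle check matters here: circular dependencies between catalog objects would have to be broken some other way before a restore order exists at all.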


We don't need a simple way; we need a way to create some sort of catalog diff and a safe way to apply it to an existing installation during the upgrade.
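[Editor's note: the catalog-diff idea can be sketched abstractly. This is a hypothetical illustration, not PostgreSQL code: catalogs are modeled as plain dicts mapping object name to definition, and the function only classifies what an upgrade would have to add, drop, or alter.]

```python
def catalog_diff(old, new):
    """Compare two catalog snapshots (hypothetical dicts of
    object name -> definition) and classify the differences an
    upgrade script would have to act on."""
    added   = {k: new[k] for k in new.keys() - old.keys()}
    dropped = {k: old[k] for k in old.keys() - new.keys()}
    changed = {k: (old[k], new[k])
               for k in old.keys() & new.keys()
               if old[k] != new[k]}
    return added, dropped, changed
```

The hard part, of course, is not computing the diff but applying it safely in place, which is what the rest of this proposal is about.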


One idea worth considering: with the postmaster shut down, use a standalone backend to check that no conflicts exist in any database, then use the new backend in bootstrap mode to apply the changes. It would still require some downtime, but nobody can avoid that when replacing the postgres binaries anyway, so that's not a real issue. As long as it eliminates dump, initdb, reload, it will be acceptable.
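[Editor's note: the check-then-apply flow above can be sketched abstractly. The `check` and `apply_changes` callables are hypothetical stand-ins for the standalone-backend conflict check and the bootstrap-mode catalog update; the key property shown is that nothing is modified until every database has passed the check.]

```python
def offline_upgrade(databases, check, apply_changes):
    """Sketch of the proposed flow: with the postmaster down,
    first run the conflict check against *every* database, and
    only if all are clean apply the catalog changes to each.
    `check(db)` returns a list of conflicts (empty if clean)."""
    conflicts = {db: probs for db in databases if (probs := check(db))}
    if conflicts:
        raise RuntimeError("upgrade aborted, conflicts found: %r" % conflicts)
    for db in databases:
        apply_changes(db)
```

Checking all databases before touching any of them is what makes the procedure safe to abort: a failed pre-check leaves the old installation untouched and fully usable.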


Jan


--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== [EMAIL PROTECTED] #


