On Wed, Jan 28, 2015 at 09:26:11PM -0800, Josh Berkus wrote:
>
> > So, for my 2c, I'm on the fence about it. On the one hand, I agree,
> > it's a bit of a complex process to get right. On the other hand, it's
> > far better if we put something out there along the lines of "if you
> > really want to, this is how to do it" than having folks try to fumble
> > through to find the correct steps themselves.
>
> So, here's the correct steps for Bruce, because his current doc does not
> cover all of these. I really think this should go in as a numbered set
> of steps; the current doc has some steps as steps, and other stuff
> buried in paragraphs.
>
> 1. Install the new version binaries on both servers, alongside the old
> version.
>
> 2. If not done by the package install, initdb the new version's data
> directory.
>
> 3. Check that the replica is not very lagged. If it is, wait for
> traffic to die down and for it to catch up.
Now that 9.4.1 is released, I would like to get this doc patch applied
--- it addresses the often-requested question of how to pg_upgrade
standby clusters.
I wasn't happy with Josh's wording above that the "replica is not
very lagged", so I added a step that checks the pg_controldata output
to verify that the primary and standby servers are synchronized.
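For concreteness, the check is just a matter of comparing the "Latest
checkpoint location" line of pg_controldata output on the primary and on
each standby, run after the servers have been shut down (standbys last);
the data directory path below is only an example:

    pg_controldata /usr/local/pgsql/data | grep 'Latest checkpoint location'

The reported value must be identical on the primary and every standby
before the rsync step is safe to run.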
Yes, this adds even more complication to the pg_upgrade instructions,
but it is really more of the same complexity. pg_upgrade really needs
an install-aware and OS-aware tool on top of it to automate much of
this.
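As a rough illustration of what such a tool might wrap, the per-standby
rsync step in the patch could be scripted something like this (the
standby host names and directory names are hypothetical, and tablespace
and pg_xlog directories would need the same treatment):

    # Run on the primary, from the directory above the old and new
    # data directories, after pg_upgrade --link has completed.
    for standby in standby1 standby2
    do
        rsync --archive --hard-links --size-only \
            data-old data-new "$standby":/opt/PostgreSQL/
    done

As the patch notes, the old and new relative cluster paths have to match
on the primary and the standbys.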
Patch attached.
--
Bruce Momjian <[email protected]> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ Everyone has their own god. +
diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
new file mode 100644
index 07ca0dc..e25e0d0
*** a/doc/src/sgml/backup.sgml
--- b/doc/src/sgml/backup.sgml
*************** tar -cf backup.tar /usr/local/pgsql/data
*** 438,445 ****
Another option is to use <application>rsync</> to perform a file
system backup. This is done by first running <application>rsync</>
while the database server is running, then shutting down the database
! server just long enough to do a second <application>rsync</>. The
! second <application>rsync</> will be much quicker than the first,
because it has relatively little data to transfer, and the end result
will be consistent because the server was down. This method
allows a file system backup to be performed with minimal downtime.
--- 438,447 ----
Another option is to use <application>rsync</> to perform a file
system backup. This is done by first running <application>rsync</>
while the database server is running, then shutting down the database
! server just long enough to do a second <command>rsync --checksum</>.
! (<option>--checksum</> is needed because <command>rsync</> has only
! one-second granularity on file modification times.)  The
! second <application>rsync</> will be quicker than the first,
because it has relatively little data to transfer, and the end result
will be consistent because the server was down. This method
allows a file system backup to be performed with minimal downtime.
diff --git a/doc/src/sgml/pgupgrade.sgml b/doc/src/sgml/pgupgrade.sgml
new file mode 100644
index e1cd260..d1c26df
*** a/doc/src/sgml/pgupgrade.sgml
--- b/doc/src/sgml/pgupgrade.sgml
*************** NET STOP postgresql-8.4
*** 315,320 ****
--- 315,324 ----
NET STOP postgresql-9.0
</programlisting>
</para>
+
+ <para>
+ Log-shipping standby servers can remain running until a later step.
+ </para>
</step>
<step>
*************** pg_upgrade.exe
*** 399,404 ****
--- 403,525 ----
</step>
<step>
+ <title>Upgrade any Log-Shipping Standby Servers</title>
+
+ <para>
+ If you have Log-Shipping Standby Servers (<xref
+ linkend="warm-standby">), follow these steps to upgrade them (before
+ starting any servers):
+ </para>
+
+ <procedure>
+
+ <step>
+ <title>Install the new PostgreSQL binaries on standby servers</title>
+
+ <para>
+ Make sure the new binaries and support files are installed on all
+ standby servers.
+ </para>
+ </step>
+
+ <step>
+ <title>Make sure the new standby data directories do <emphasis>not</>
+ exist</title>
+
+ <para>
+ Make sure the new standby data directories do <emphasis>not</>
+ exist or are empty. If <application>initdb</> was run, delete
+ the standby server data directories.
+ </para>
+ </step>
+
+ <step>
+ <title>Install custom shared object files</title>
+
+ <para>
+ Install the same custom shared object files on the new standbys
+ that you installed in the new master cluster.
+ </para>
+ </step>
+
+ <step>
+ <title>Stop standby servers</title>
+
+ <para>
+ If the standby servers are still running, stop them now using the
+ above instructions.
+ </para>
+ </step>
+
+ <step>
+ <title>Verify Standby Servers</title>
+
+ <para>
+      To verify that the old standby servers are synchronized with the
+      primary and have not been modified, run <application>pg_controldata</>
+      against the primary and standby clusters and check that the
+      <quote>Latest checkpoint location</> values match in all clusters.
+      (This requires the standbys to be shut down after the primary.)
+ </para>
+ </step>
+
+ <step>
+ <title>Save configuration files</title>
+
+ <para>
+       Save any configuration files from the standbys that you need to
+       keep, e.g. <filename>postgresql.conf</> and <filename>recovery.conf</>,
+       because these will be overwritten or removed in the next step.
+ </para>
+ </step>
+
+ <step>
+ <title>Run <application>rsync</></title>
+
+ <para>
+ From a directory that is above the old and new database cluster
+       directories, run this for each standby server:
+
+ <programlisting>
+ rsync --archive --hard-links --size-only old_pgdata new_pgdata remote_dir
+ </programlisting>
+
+ where <option>old_pgdata</> and <option>new_pgdata</> are relative
+ to the current directory, and <option>remote_dir</> is
+ <emphasis>above</> the old and new cluster directories on
+ the standby server. The old and new relative cluster paths
+ must match on the master and standby server. Consult the
+ <application>rsync</> manual page for details on specifying the
+ remote directory, e.g. <literal>standbyhost:/opt/PostgreSQL/</>.
+ <application>rsync</> will be fast when <application>pg_upgrade</>'s
+ <option>--link</> mode is used because it will create hard links
+ on the remote server rather than transferring user data.
+ </para>
+
+ <para>
+ If you have tablespaces, you will need to run a similar
+ <application>rsync</> command for each tablespace directory. If you
+ have relocated <filename>pg_xlog</> outside the data directories,
+ <application>rsync</> must be run on those directories too.
+ </para>
+ </step>
+
+ <step>
+ <title>Configure log-shipping to standby servers</title>
+
+ <para>
+ Configure the servers for log shipping. (You do not need to run
+ <function>pg_start_backup()</> and <function>pg_stop_backup()</>
+       or take a file system backup because the standbys are still
+       synchronized with the master.)
+ </para>
+ </step>
+
+ </procedure>
+
+ </step>
+
+ <step>
<title>Restore <filename>pg_hba.conf</></title>
<para>
*************** pg_upgrade.exe
*** 409,414 ****
--- 530,544 ----
</step>
<step>
+ <title>Start the new server</title>
+
+ <para>
+     The new server can now be safely started, followed by any
+     <application>rsync</>'ed standby servers.
+ </para>
+ </step>
+
+ <step>
<title>Post-Upgrade processing</title>
<para>
*************** psql --username postgres --file script.s
*** 548,569 ****
</para>
<para>
- A Log-Shipping Standby Server (<xref linkend="warm-standby">) cannot
- be upgraded because the server must allow writes. The simplest way
- is to upgrade the primary and use <command>rsync</> to rebuild the
- standbys. You can run <command>rsync</> while the primary is down,
- or as part of a base backup (<xref linkend="backup-base-backup">)
- which overwrites the old standby cluster.
- </para>
-
- <para>
If you want to use link mode and you do not want your old cluster
to be modified when the new cluster is started, make a copy of the
old cluster and upgrade that in link mode. To make a valid copy
of the old cluster, use <command>rsync</> to create a dirty
copy of the old cluster while the server is running, then shut down
! the old server and run <command>rsync</> again to update the copy with any
! changes to make it consistent. You might want to exclude some
files, e.g. <filename>postmaster.pid</>, as documented in <xref
linkend="backup-lowlevel-base-backup">. If your file system supports
file system snapshots or copy-on-write file copies, you can use that
--- 678,692 ----
</para>
<para>
If you want to use link mode and you do not want your old cluster
to be modified when the new cluster is started, make a copy of the
old cluster and upgrade that in link mode. To make a valid copy
of the old cluster, use <command>rsync</> to create a dirty
copy of the old cluster while the server is running, then shut down
! the old server and run <command>rsync --checksum</> again to update the
! copy with any changes to make it consistent.  (<option>--checksum</>
! is needed because <command>rsync</> has only one-second granularity
! on file modification times.)  You might want to exclude some
files, e.g. <filename>postmaster.pid</>, as documented in <xref
linkend="backup-lowlevel-base-backup">. If your file system supports
file system snapshots or copy-on-write file copies, you can use that
--
Sent via pgsql-hackers mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers