Hi.

We're moving our 1 TB Postgres DB from EC2 to RDS and decided to use
Bucardo, since it can replicate between different PG versions.
We have a PG 9.2 cluster, and instead of upgrading it in place, we'll use
Bucardo to replicate to a PG 9.6 instance, then use DMS or pg_dump to
restore that into RDS.

Because of the size of the current 9.2 DB, I can't stop the application
from writing to the DB for the whole time it takes to pg_dump it and
restore it into the new Bucardo slave.

So, I thought I would do something like this:

   1. Install Bucardo and add the large tables to a pushdelta sync
   2. Copy the tables to the new server (e.g. with pg_dump)
   3. Start up Bucardo and let it catch up (i.e. copy all row changes made
   since step 2)

Steps for the above would be pretty much like this article
<https://www.endpoint.com/blog/2009/09/16/migrating-postgres-with-bucardo-4>;
if I'm not mistaken.
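For concreteness, I imagine the steps above looking roughly like this. All
database names, table names, and hosts below are hypothetical, and the exact
command syntax differs between Bucardo versions (the linked article uses the
older bucardo_ctl form), so treat this as a sketch rather than a recipe:

```shell
# Hypothetical hosts/names -- adjust for your environment.
SRC=pg92-master.internal        # existing PG 9.2 master
DST=pg96-slave.internal         # new PG 9.6 target

# Step 1: register both databases and the large tables with Bucardo and
# create the source->target sync, but don't let it start replicating yet.
bucardo add db srcdb dbname=appdb host=$SRC
bucardo add db dstdb dbname=appdb host=$DST
bucardo add table public.big_table db=srcdb relgroup=bigtables
bucardo add sync migsync relgroup=bigtables dbs=srcdb:source,dstdb:target autokick=0

# Step 2: copy the current table contents across. Bucardo's triggers are
# already recording changes on the source, so writes that happen during
# the copy should not be lost.
pg_dump -h "$SRC" -Fc -t public.big_table appdb | pg_restore -h "$DST" -d appdb

# Step 3: start the sync so Bucardo replays every row changed since step 1.
bucardo activate migsync
bucardo kick migsync
```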

My questions are:

   1. Is this the right approach? Do you have any other suggestions?
   2. I'll end up with: pg-9.2 master --> Bucardo instance (running the
   bucardo DB only) --> pg-9.6 slave fed by Bucardo
      1. When doing the pg_dump, I only need to restore it on the "pg-9.6
      slave fed by Bucardo" instance, correct? The bucardo DB itself does
      not store the table data?

Thanks!
_______________________________________________
Bucardo-general mailing list
[email protected]
https://mail.endcrypt.com/mailman/listinfo/bucardo-general
