We've got a couple of production databases, mostly MyISAM tables, that
can't be taken offline without bringing down our application.  To
reduce downtime, we run a full mysqldump once a week and back up the
binary logs every day, so we can always use them to "catch up" from
the most recent full dump.  Because we're mostly using MyISAM, we
can't run the dump inside a single transaction, so the database is
very slow for the roughly 10 minutes the dump takes, and that window
will only grow as the database does.
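For what it's worth, our routine is roughly the following (a sketch
only -- hostnames, paths, and credentials here are made-up
placeholders, not our real setup):

```shell
#!/bin/sh
# Weekly: full dump.  --lock-all-tables because MyISAM has no
# transactions; --flush-logs rotates to a fresh binary log so the
# dump marks a clean catch-up point; --master-data records the
# binlog coordinates in the dump file itself.
mysqldump --all-databases --lock-all-tables --flush-logs \
    --master-data=2 > /backup/full-$(date +%Y%m%d).sql

# Daily: close out the current binary log and copy the finished
# ones somewhere safe.
mysqladmin flush-logs
cp /var/lib/mysql/mysql-bin.[0-9]* /backup/binlogs/
```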

We could eliminate the slowness entirely if we ran full dumps off a
replication slave.  We could do it on a slave not in use by the
application at that time.  However, the "master data" in that dump
file would refer to the slave's binary logs, not the real master.
That means we couldn't use that dump to start new replication slaves,
nor to restore a master and catch up (though the latter matters less,
since we could simply run these dumps more often).
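To illustrate the problem: --master-data makes mysqldump embed the
dump server's *own* binlog coordinates near the top of the dump file
(the filenames below are invented for the example):

```shell
# Running this against a slave...
mysqldump --all-databases --master-data=2 > slave-dump.sql

# ...puts a line like this in slave-dump.sql:
#
#   -- CHANGE MASTER TO MASTER_LOG_FILE='slave-bin.000042',
#   --                  MASTER_LOG_POS=1234;
#
# Those are the slave's binary logs.  A new slave pointed at the real
# master can't use them, which is exactly the limitation described
# above.
```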

One tactic that seems to work is to stop replication on the slave,
note the master data in "show slave status", run a dump, and keep that
master data alongside that dump.  This is clunky for several reasons:
it's harder to automate the backup, harder to automate the restore,
and error-prone.
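The whole tactic, sketched as a script (again a rough sketch with
placeholder paths, and assuming the mysql client can authenticate
without prompting):

```shell
#!/bin/sh
# Pause the SQL thread so the data and the coordinates stay in sync
# while we dump.
mysql -e 'STOP SLAVE SQL_THREAD;'

# Relay_Master_Log_File / Exec_Master_Log_Pos from SHOW SLAVE STATUS
# are the *master's* coordinates for the last event this slave has
# applied -- these have to be stored next to the dump by hand.
mysql -e 'SHOW SLAVE STATUS\G' \
  | grep -E 'Relay_Master_Log_File|Exec_Master_Log_Pos' \
  > /backup/master-coords.txt

mysqldump --all-databases > /backup/slave-dump.sql

mysql -e 'START SLAVE SQL_THREAD;'
```

The restore is where it gets error-prone: someone has to read
master-coords.txt and type the matching CHANGE MASTER TO statement by
hand, instead of the dump file carrying it.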

Is there any way to do a mysqldump of a replication slave and have
that dumpfile contain the master data for the master server?
  -- Cos

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
