slave backups master data

2007-07-16 Thread Ofer Inbar
We've got a couple of production databases using mostly MyISAM tables
that can't be taken offline without bringing down our application.  To
reduce downtime, we run a full mysqldump once a week and back up the
binary logs every day, so we can always use them to catch up from
the most recent full dump.  Because we're mostly using MyISAM we can't
make the dump a transaction, so the database is very slow for about 10
minutes while we run the dump.  This will get longer and longer as our
database grows.
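The weekly-dump-plus-binlog routine might look roughly like this (the paths, host names, and exact mysqldump options here are assumptions for illustration, not what Ofer actually runs):

```shell
# Weekly: full dump. --flush-logs rotates the binary log at dump time, so
# the dump marks a clean boundary for later point-in-time recovery.
# --lock-all-tables is needed because MyISAM rules out --single-transaction.
mysqldump --all-databases --lock-all-tables --flush-logs \
    > /backups/full_$(date +%F).sql

# Daily: rotate the binary log, then copy the closed logs off-host.
mysqladmin flush-logs
rsync -a /var/lib/mysql/mysql-bin.[0-9]* backuphost:/backups/binlogs/
```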

We could eliminate the slowness entirely if we ran full dumps off a
replication slave.  We could do it on a slave not in use by the
application at that time.  However, the master data in that dump
file would refer to the slave's binary logs, not the real master.
That means we couldn't use that dump to start new replication slaves,
nor to restore a master and catch up (though the latter is less
important since we could run these dumps more often).

One tactic that seems to work is to stop replication on the slave,
note the master data in show slave status, run a dump, and keep that
master data alongside that dump.  This is clunky for several reasons;
it's harder to automate the backup, harder to automate the restore,
and error-prone.
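The clunky part, recording the master coordinates next to the dump, can at least be scripted. A sketch that extracts the coordinates from saved `SHOW SLAVE STATUS\G` output and turns them into a ready-to-run `CHANGE MASTER TO` statement (the field names are the real ones; the sample values are made up):

```shell
# Sample of the relevant lines from SHOW SLAVE STATUS\G, as captured
# while replication is stopped.
status='
        Master_Log_File: mysql-bin.000123
    Read_Master_Log_Pos: 456789
  Relay_Master_Log_File: mysql-bin.000123
    Exec_Master_Log_Pos: 456700
'

# Relay_Master_Log_File / Exec_Master_Log_Pos are the coordinates on the
# *master* up to which the slave has actually executed events.
file=$(printf '%s\n' "$status" | awk -F': ' '/Relay_Master_Log_File/ {print $2}')
pos=$(printf '%s\n' "$status" | awk -F': ' '/Exec_Master_Log_Pos/ {print $2}')

# Emit a statement to store alongside the dump for restores.
printf "CHANGE MASTER TO MASTER_LOG_FILE='%s', MASTER_LOG_POS=%s;\n" "$file" "$pos"
```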

Is there any way to do a mysqldump of a replication slave and have
that dumpfile contain the master data for the master server?
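For what it's worth, later MySQL releases (5.5 and up, so after this 2007 thread) added exactly this: a `--dump-slave` option for mysqldump that writes the master's coordinates, taken from the slave's `SHOW SLAVE STATUS`, into the dump as a `CHANGE MASTER TO` statement:

```shell
# Run on the slave. =2 writes CHANGE MASTER TO as a comment, so restoring
# the dump doesn't repoint replication unless you uncomment it.
# Note: --dump-slave pauses the slave SQL thread for the duration of the dump.
mysqldump --all-databases --lock-all-tables --dump-slave=2 > from_slave.sql
```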
  -- Cos

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]



Re: slave backups master data

2007-07-16 Thread Eric Bergen

Hi Ofer,

I prefer to use mylvmbackup and LVM to back up MySQL. Its snapshot
includes a copy of the relay-log.info file, which records
exec_master_log_pos and relay_master_log_file. These let you use
the snapshot to set up another slave. It's also much faster to take a
snapshot of the filesystem than it is to run mysqldump, especially on
large datasets.
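The manual steps that mylvmbackup automates look roughly like this (the volume group, logical volume names, and mount points are assumptions):

```shell
# FLUSH TABLES WITH READ LOCK is released when the client disconnects,
# so the snapshot must be taken inside the same session; "system" shells
# out from the mysql client without dropping the connection.
mysql <<'EOF'
FLUSH TABLES WITH READ LOCK;
system lvcreate --snapshot --size 1G --name mysql_snap /dev/vg0/mysql
UNLOCK TABLES;
EOF

# Copy the consistent snapshot off-host, then drop it.
mount /dev/vg0/mysql_snap /mnt/snap
rsync -a /mnt/snap/ backuphost:/backups/mysql/
umount /mnt/snap
lvremove -f /dev/vg0/mysql_snap
```

Because the lock is only held for the instant the snapshot is created, the server is back to normal in seconds regardless of data size.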

-Eric

On 7/16/07, Ofer Inbar [EMAIL PROTECTED] wrote:

[Ofer's original message quoted in full; snipped]





--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com




Re: slave backups master data

2007-07-16 Thread Baron Schwartz

Hi,

Ofer Inbar wrote:

We could eliminate the slowness entirely if we ran full dumps off a
replication slave. [...]


This doesn't address your question directly, but I have gotten in the habit of warning 
people to verify that their slaves really do have exactly the same data as the master.  Many 
people assume they do, but when they check, they find there are differences, 
usually attributable to bugs in replication.  Backing up from slaves is wonderful, but 
it is only a good idea if the slave's data is right :-)


This is why I wrote MySQL Table Checksum, which, as far as I know, is currently the only 
way to verify a slave's data: http://mysqltoolkit.sourceforge.net/
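Lacking the toolkit, a rough native sanity check is possible with `CHECKSUM TABLE`, run on both servers while the slave is caught up and writes are paused (the host and table names below are placeholders):

```shell
# -N suppresses column headers so the outputs diff cleanly.
mysql -h master1 -N -e "CHECKSUM TABLE db.users, db.orders" > master.chk
mysql -h slave1  -N -e "CHECKSUM TABLE db.users, db.orders" > slave.chk
diff master.chk slave.chk && echo "checksums match"
```

Note that `CHECKSUM TABLE` scans each table (and locks it, for MyISAM), so this is far cruder than the approach the toolkit takes.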


cheers
Baron
