Hi, 

For backups on the primary database, I know nothing better than having all your 
tables in InnoDB and using innobackupex (or MySQL Enterprise Backup).   This, 
however, can still hang, because it uses FLUSH TABLES WITH READ LOCK to take 
backups of MyISAM tables.    One may want to script it to kill the backup if 
the wait exceeds some threshold.   Backups taken this way also support 
incremental backups, which may reduce the impact.  
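The kill-if-the-wait-exceeds-a-threshold idea can be scripted against the
processlist. A minimal sketch of the decision step in Python, assuming the rows
have already been fetched via a connector; the state strings and field names
here are illustrative assumptions, not exact server output:

```python
# Sketch: pick connection ids of sessions stuck around FLUSH TABLES WITH
# READ LOCK longer than a threshold. Rows mimic SHOW PROCESSLIST output;
# the state strings below are assumptions and should be checked against
# what your server version actually reports.

FTWRL_STATES = ("Waiting for global read lock", "Flushing tables")

def ids_to_kill(processlist, threshold_seconds=60):
    """Return ids of sessions waiting on FTWRL longer than the threshold."""
    return [
        row["ID"]
        for row in processlist
        if row.get("STATE") in FTWRL_STATES
        and row.get("TIME", 0) > threshold_seconds
    ]

# Example rows, shaped like a connector's dict cursor would return them:
rows = [
    {"ID": 12, "STATE": "Waiting for global read lock", "TIME": 300},
    {"ID": 13, "STATE": "Sleep", "TIME": 9000},
]
print(ids_to_kill(rows))  # -> [12]
```

The returned ids would then be fed to KILL statements (or the backup process
itself would be terminated); that part is deliberately left out of the sketch.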

For offloading backups to a replica, there are more options, because the 
replica can be frozen and/or shut down.   An InnoDB database has to be shut 
down to take a consistent file-level copy.   If it is not, the copy will 
contain cute little inconsistencies, unless the DBA is one lucky guy who 
always wins playing roulette.   
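The cold-copy procedure on a replica is really just an ordered sequence of
steps. A sketch of that ordering, with the usual tools (mysqladmin, rsync)
but placeholder paths and no credentials; adapt both before use:

```python
# Sketch: ordered shell steps for a consistent cold backup of a replica.
# Paths and service names are placeholders, not a tested configuration.

def cold_backup_steps(datadir="/var/lib/mysql", dest="/backups/replica"):
    """Return the ordered commands for a cold replica backup."""
    return [
        'mysql -e "STOP SLAVE"',          # stop replication at a clean point
        "mysqladmin shutdown",            # let mysqld flush all buffers to disk
        f"rsync -a {datadir}/ {dest}/",   # copy the quiesced datadir
        "systemctl start mysql",          # bring the replica back up; it
                                          # catches up from the binary log
    ]

for step in cold_backup_steps():
    print(step)
```

The order matters: the datadir is only consistent between the shutdown and the
restart, which is exactly the window the rsync occupies.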

Combining the two, I like the idea of running MySQL Enterprise Backup on a 
replica, with all tables in InnoDB.   

After a backup has been taken, it will eventually need to be restored (unless 
someone just enjoys taking them).   For this reason, it will have to be 
brought to the system being recovered.   Unless somebody knows in advance when 
the database will need to be recovered (e.g. it is known that a bad guy always 
corrupts it on Monday mornings), the backup always needs to be available for 
restore.   These considerations usually imply things like shared filesystems 
between primary and replica, ruling out cross-datacenter restores, and the 
like.   

Backing up the binary logs provides continuous point-in-time recovery coverage 
instead of discrete restore points.   
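A binlog archiver only has to keep copying the files the server has finished
writing. A sketch of the selection logic, assuming the file list comes from
SHOW BINARY LOGS (oldest first); the "last archived" bookkeeping is an
assumption of the sketch, not something the server provides:

```python
def binlogs_to_archive(server_logs, last_archived):
    """Given binary log names oldest-first, return the files that still
    need copying: everything after the last archived one, excluding the
    newest file, which the server is still writing to."""
    finished = server_logs[:-1]  # the last file is still active
    if last_archived not in finished:
        return finished          # nothing archived yet (or it was purged)
    idx = finished.index(last_archived)
    return finished[idx + 1:]

logs = ["binlog.000001", "binlog.000002", "binlog.000003", "binlog.000004"]
print(binlogs_to_archive(logs, "binlog.000002"))  # -> ['binlog.000003']
```

Run on a schedule (or driven by mysqlbinlog's remote raw mode), this is what
turns discrete backups into continuous coverage.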

Cheers
Karen 


On 01.11.2012, at 8:53, machiel.richa...@gmail.com wrote:

> Well, the biggest problem we have to answer for the clients is the following:
> 1. Backup method that doesn't take long and don't impact system
> 2. Restore needs to be done on a quick as possible way in order to minimize 
> downtime.
> 
> The one client is running master - master replication with master server in 
> usa, and slave in south africa. They need master backup to be done in the 
> states.
> 
> 
> 
> 
> -----Original Message-----
> From: Reindl Harald <h.rei...@thelounge.net>
> Date: Thu, 01 Nov 2012 16:49:45 
> To: mysql@lists.mysql.com<mysql@lists.mysql.com>
> Subject: Re: Mysql backup for large databases
> 
> good luck
> 
> i would call snapshots on a running system much more dumb
> than "innodb_flush_log_at_trx_commit = 2" on systems with
> 100% stable power instead waste IOPS on shared storages
> 
> Am 01.11.2012 16:45, schrieb Singer Wang:
>> Assuming you're not doing dumb stuff like innodb_flush_log_at_tx=0 or 2 and 
>> etc, you should be fine. We have been
>> using the trio: flush tables with read lock, xfs_freeze, snapshot for months 
>> now without any issues. And we test
>> the backups (we load the backup into a staging once a day, and dev once a 
>> week) 
>> 
>> On Thu, Nov 1, 2012 at 11:41 AM, Reindl Harald <h.rei...@thelounge.net 
>> <mailto:h.rei...@thelounge.net>> wrote:
>> 
>>> Why do you need downtime?
>> 
>>    because mysqld has many buffers in memory and there
>>    is no atomic "flush buffers in daemon and freeze backend FS"
>> 
>>    short ago there was a guy on this list which had to realize
>>    this the hard way with a corrupt slave taken from a snapshot
>> 
>>    that's why i would ALWAYS do master/slave what means ONE time
>>    down (rsync; stop master; rsync; start master) for a small
>>    timewindow and after that you can stop the slave, take a
>>    100% consistent backup of it's whole datadir and start
>>    the slave again which will do all transactions from the
>>    binarylog happened in the meantime
> 
> 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql
