As I said before:

use a replication slave dedicated to backups;
you can even let that slave write its own binlog
and sync another slave from it

* rsync backups only transfer the differences
* they are extremely fast after the first run
* a dedicated backup slave has ZERO impact on production

I have been doing daily rsync backups of 1.5 TB of data
over a WAN link for years, and the real traffic is only
between 2 and 5 GB each day
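Not from the original mail, but roughly what such a slave-side rsync backup
looks like as a script; the paths, the host name and the systemd unit name
are assumptions, not something the thread specifies:

  #!/bin/bash
  # Daily rsync backup taken on a dedicated backup slave (sketch).
  # /var/lib/mysql, backup-host and the mysqld unit are placeholder names.
  set -euo pipefail

  DATADIR=/var/lib/mysql            # the slave's MySQL datadir (assumed path)
  DEST=backup-host:/backup/mysql    # backup target on the other end of the WAN

  # stop the slave so the files on disk are consistent; the master is untouched
  systemctl stop mysqld

  # rsync only sends the changed parts of each file, which is why a 1.5 TB
  # datadir can come down to a few GB of real traffic per day
  rsync -aH --delete --compress "$DATADIR"/ "$DEST"/

  # start the slave again; it catches up from the master's binary log
  systemctl start mysqld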

On 01.11.2012 16:53, machiel.richa...@gmail.com wrote:
> Well, the biggest problem we have to answer for the clients is the following:
> 1. A backup method that doesn't take long and doesn't impact the system.
> 2. The restore needs to be done as quickly as possible in order to minimize
> downtime.
> 
> The one client is running master-master replication with the master server in
> the USA and the slave in South Africa. They need the master backup to be done
> in the States.
> 
> -----Original Message-----
> From: Reindl Harald <h.rei...@thelounge.net>
> Date: Thu, 01 Nov 2012 16:49:45 
> To: mysql@lists.mysql.com<mysql@lists.mysql.com>
> Subject: Re: Mysql backup for large databases
> 
> Good luck.
> 
> I would call snapshots on a running system much more dumb
> than "innodb_flush_log_at_trx_commit = 2" on systems with
> 100% stable power, instead of wasting IOPS on shared storage.
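(For reference, not from the original mails: the setting argued about here is
an InnoDB option in my.cnf. 1 is the fully durable default; 2 writes the redo
log at each commit but only syncs it to disk about once per second, so up to a
second of committed transactions can be lost on a power failure.)

  [mysqld]
  # 1 = flush and sync the redo log at every commit (default, fully durable)
  # 2 = write at commit, fsync roughly once per second; cheaper on IOPS
  innodb_flush_log_at_trx_commit = 2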
> 
> On 01.11.2012 16:45, Singer Wang wrote:
>> Assuming you're not doing dumb stuff like innodb_flush_log_at_trx_commit=0
>> or 2, etc., you should be fine. We have been using the trio FLUSH TABLES WITH
>> READ LOCK, xfs_freeze, snapshot for months now without any issues. And we test
>> the backups (we load the backup into staging once a day, and into dev once a
>> week).
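(Not part of the quoted mail: a rough sketch of how that lock/freeze/snapshot
trio can be scripted, assuming an XFS filesystem on an LVM volume, i.e.
/dev/vg0/mysql mounted at /var/lib/mysql, and client credentials coming from
~/.my.cnf; all of these names are placeholders.)

  #!/bin/bash
  # FLUSH TABLES WITH READ LOCK + xfs_freeze + LVM snapshot (sketch).
  set -euo pipefail

  MOUNT=/var/lib/mysql     # XFS filesystem holding the datadir (assumed)
  LV=/dev/vg0/mysql        # logical volume underneath it (assumed)

  # keep one mysql session open as a coprocess: the read lock is released
  # the moment the session that took it disconnects
  coproc MYSQL { mysql --unbuffered; }

  echo "FLUSH TABLES WITH READ LOCK;" >&"${MYSQL[1]}"
  echo "SELECT 'locked';"             >&"${MYSQL[1]}"
  read -r -u "${MYSQL[0]}" _          # wait until the lock has actually been taken

  xfs_freeze -f "$MOUNT"                                  # quiesce the filesystem
  lvcreate --snapshot --size 10G --name mysql-snap "$LV"  # point-in-time copy
  xfs_freeze -u "$MOUNT"                                  # thaw

  echo "UNLOCK TABLES;" >&"${MYSQL[1]}"
  eval "exec ${MYSQL[1]}>&-"          # close the session, releasing the lock
  wait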
>>
>> On Thu, Nov 1, 2012 at 11:41 AM, Reindl Harald <h.rei...@thelounge.net> wrote:
>>
>>     > Why do you need downtime?
>>
>>     Because mysqld keeps many buffers in memory and there is
>>     no atomic "flush the daemon's buffers and freeze the backing
>>     filesystem" operation.
>>
>>     A short while ago there was a guy on this list who had to
>>     learn this the hard way, with a corrupt slave taken from a
>>     snapshot.
>>
>>     That's why I would ALWAYS do master/slave, which means ONE
>>     downtime (rsync; stop master; rsync; start master) for a
>>     small time window. After that you can stop the slave, take a
>>     100% consistent backup of its whole datadir, and start the
>>     slave again; it will replay all transactions from the binary
>>     log that happened in the meantime.
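(Again not part of the quoted mail: a sketch of that seeding procedure with a
single short downtime window on the master; the host name, datadir path and
systemd unit are assumptions.)

  #!/bin/bash
  # Seed a slave with one short master outage, as described above (sketch).
  set -euo pipefail

  DATADIR=/var/lib/mysql
  SLAVE=slave-host

  # first pass while the master keeps running: moves the bulk of the data
  rsync -aH --delete "$DATADIR"/ "$SLAVE:$DATADIR"/

  # the small downtime window: stop, copy the remaining delta, start again
  systemctl stop mysqld
  rsync -aH --delete "$DATADIR"/ "$SLAVE:$DATADIR"/
  systemctl start mysqld

  # from here on, every backup is taken on the slave only:
  # stop the slave's mysqld, copy its datadir, start it again and let it
  # replay the master's binary log to catch up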
