Hi...

Quite often, people create a replica for the purpose of switching over to it 
immediately if the master fails completely.   I like the idea of taking backups 
on that same replica, rather than creating another replica just for backups.   

One reason I like it is that if I have to recover a database, I will want to 
do the following: a) restore the database backup; b) apply binary logs starting 
from the exact position of which that backup is a snapshot.   If I have to use 
binary logs taken on another server (e.g. backups are taken on a replica and 
binary logs on the master), then I will need to match the contents of one 
server's binary logs against the time the backup was taken on the other, which 
makes my recovery approximate and error-prone. From this it follows that I 
will want to set log-slave-updates and, for recovery, use backups of a replica 
together with the binary logs generated by that same replica.   
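
As an illustration, with a backup that records its binlog coordinates (as, 
say, XtraBackup does in xtrabackup_binlog_info; file names and positions 
below are made up):

    # 1. Restore the backup taken on the replica into the datadir.
    # 2. Read the replica's binlog file/position recorded with the backup.
    # 3. Replay the replica's own binary logs from that exact position:
    mysqlbinlog --start-position=107 mysql-bin.000042 mysql-bin.000043 \
      | mysql -u root -p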

If the backups are on a third system, and I also have another replica for 
failover, I will not only use an additional server and storage, but my 
recovery plan will also become messy.  If the primary fails and I switch to a 
replica, I will have to make that replica the master of the replica I take 
backups on, which can be done, well, approximately.   If I do not want to 
deal with manually finding a position to synchronize the two, I will end up 
destroying the backup replica and rebuilding it from the replica I just 
switched to, and for that I will in turn have to take another backup.   If 
instead the primary fails in a way that makes me restore the database from 
the replica I designated for backups, I will also have to rebuild the replica 
I use for failover protection.   
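
For illustration, that repointing step would look roughly like this (host 
name and coordinates are hypothetical; digging the right coordinates out of 
the new master's binlogs is exactly the manual, approximate part):

    # On the backup replica, after the failover replica was promoted:
    mysql -e "STOP SLAVE;
              CHANGE MASTER TO
                MASTER_HOST     = 'new-master.example.com',
                MASTER_LOG_FILE = 'mysql-bin.000123',
                MASTER_LOG_POS  = 4;
              START SLAVE;"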

All this tells me that it will be substantially simpler if the replica for 
backups and the replica for failover are the same thing.   

Peace
Karen



On 02.11.2012, at 0:55, Manuel Arostegui wrote:

> Hello,
> 
> Just one more suggestion for doing full backups of large databases:
> 
> - A dedicated slave (either a physical machine, or a disk cabinet accessed
> over iSCSI from a machine with just a bunch of RAM, etc.)
> - Get the slave delayed by a certain time (e.g. 1 hour, 2 hours... depends on
> your workload) using Percona Toolkit's pt-slave-delay (
> http://www.percona.com/doc/percona-toolkit/2.1/pt-slave-delay.html); see the
> sketch below
> - Make sure that slave has log-slave-updates in my.cnf.
> - Make sure the master keeps the logs necessary for a point-in-time
> recovery if needed.
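> 
> A rough sketch of that setup (host, credentials and the delay are
> illustrative; option names as per the pt-slave-delay documentation):
> 
>     # my.cnf on the delayed slave:
>     #   [mysqld]
>     #   log-slave-updates
>     #   log-bin = mysql-bin
>     #
>     # Keep the slave about one hour behind its master, checking every minute:
>     pt-slave-delay --delay 1h --interval 1m h=delayed-slave,u=backup,p=secret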
> 
> If something goes wrong you have a full copy of your database, just delayed
> a few hours. If you want to recover from a DROP DATABASE (or any big mistake
> executed from the MySQL CLI) on the master, you can just sync your delayed
> slave up to that specific statement: go through your master's binlog, locate
> the "drop database" statement, and run START SLAVE SQL_THREAD UNTIL
> master_log_file = 'mysql-bin.XXX', master_log_pos = XXX; with the position
> right before the bad one. That way you'd have your database as it was just
> before the wrong statement was executed.
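> 
> Concretely, that could look like this (file name and position are made up
> for the example):
> 
>     # Find where the bad statement starts in the master's binlog:
>     mysqlbinlog mysql-bin.000123 | grep -i -B5 'drop database'
>     # Say the event begins at position 981; replay everything before it:
>     mysql -e "START SLAVE SQL_THREAD UNTIL
>                 MASTER_LOG_FILE = 'mysql-bin.000123',
>                 MASTER_LOG_POS  = 981"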
> 
> As Rick said - if you're thinking about doing snapshots, make sure you stop
> your mysql daemon on the delayed slave so that everything is committed to
> disk. Otherwise you can end up with a corrupted DB which won't start when
> you need it.
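> 
> For example (assuming LVM; volume and path names are made up):
> 
>     mysql -e "STOP SLAVE"
>     mysqladmin shutdown
>     lvcreate --snapshot --name mysql-snap --size 10G /dev/vg0/mysql-data
>     /etc/init.d/mysql start    # or your init system's equivalent
>     mysql -e "START SLAVE"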
> 
> And lastly, but probably most importantly... test your backups periodically!
> 
> Hope this helps
> Manuel.
> 
> 
> 2012/11/1 Karen Abgarian <a...@apple.com>
> 
>> Hi,
>> 
>> For doing backups on the primary database, I know nothing better than having
>> your tables in InnoDB and using Innobackup (or MySQL Enterprise Backup).
>> This, however, can still hang, because it uses FLUSH TABLES WITH READ LOCK
>> to back up MyISAM tables.    One may want to script it to kill the backup if
>> the wait exceeds some threshold.   Backups taken this way also support
>> incremental backups, which may reduce the impact.
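>> 
>> As a sketch, with Percona XtraBackup's innobackupex script (directory names
>> and credentials are illustrative; MySQL Enterprise Backup has equivalents):
>> 
>>     # Full backup:
>>     innobackupex --user=backup --password=secret /backups
>>     # Later, an incremental against it (copies only changed InnoDB pages):
>>     innobackupex --incremental \
>>       --incremental-basedir=/backups/2012-11-01_00-00-00 \
>>       --user=backup --password=secret /backups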
>> 
>> For offloading the backups to a replica, there exist more options, because
>> the replica can be frozen and/or shut down.   For an InnoDB database, it has
>> to be shut down to take a consistent file-level backup.   If it is not, the
>> result will be cute little inconsistencies, unless the DBA is one lucky guy
>> who always wins at roulette.
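>> 
>> A minimal sketch of such a cold backup (paths are illustrative):
>> 
>>     # Record replication coordinates, then shut down cleanly:
>>     mysql -e "STOP SLAVE; SHOW SLAVE STATUS\G" > /backups/slave-status.txt
>>     mysqladmin shutdown
>>     tar czf /backups/mysql-$(date +%F).tar.gz -C /var/lib mysql
>>     /etc/init.d/mysql start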
>> 
>> Combining the two, I like the idea of doing a MySQL Enterprise Backup on a
>> replica and having all tables in InnoDB.
>> 
>> After a backup has been taken, it will eventually need to be restored,
>> unless someone just likes taking them.   For this reason, it will have to
>> be brought to the system being recovered.   Unless somebody knows in advance
>> when the database will need to be recovered (e.g. it is known that a bad guy
>> always corrupts it on Monday mornings), the backup will need to be
>> available for restore at all times.   These considerations usually imply
>> things like shared filesystems between primary and replica, ruling out
>> backups that must be pulled across datacenters for recovery, and the like.
>> 
>> Backing up the binary logs as well provides continuous coverage for
>> recovery instead of discrete points.
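>> 
>> One simple way to do that (host and paths are made up):
>> 
>>     # Close the current binlog, then copy the finished ones off the box:
>>     mysql -e "FLUSH LOGS"
>>     rsync -a /var/lib/mysql/mysql-bin.[0-9]* backuphost:/backups/binlogs/
>>     # The newest file is still being written; a stricter script would skip it.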
>> 
>> Cheers
>> Karen
>> 
>> 
>> On 01.11.2012, at 8:53, machiel.richa...@gmail.com wrote:
>> 
>>> Well, the biggest problem we have to answer for the clients is the
>>> following:
>>> 1. A backup method that doesn't take long and doesn't impact the system.
>>> 2. A restore that can be done as quickly as possible in order to
>>> minimize downtime.
>>> 
>>> One client is running master-master replication with the master server
>>> in the USA and the slave in South Africa. They need the master backup to
>>> be done in the States.
>>> 
>>> 
>>> Sent via my BlackBerry from Vodacom - let your email find you!
>>> 
>>> -----Original Message-----
>>> From: Reindl Harald <h.rei...@thelounge.net>
>>> Date: Thu, 01 Nov 2012 16:49:45
>>> To: mysql@lists.mysql.com<mysql@lists.mysql.com>
>>> Subject: Re: Mysql backup for large databases
>>> 
>>> good luck
>>> 
>>> i would call snapshots on a running system much dumber
>>> than "innodb_flush_log_at_trx_commit = 2" on systems with
>>> 100% stable power, instead of wasting IOPS on shared storage
>>> 
>>> On 01.11.2012 16:45, Singer Wang wrote:
>>>> Assuming you're not doing dumb stuff like
>>>> innodb_flush_log_at_trx_commit=0 or 2, etc., you should be fine. We have
>>>> been using the trio (flush tables with read lock, xfs_freeze, snapshot)
>>>> for months now without any issues. And we test the backups (we load the
>>>> backup into staging once a day, and into dev once a week).
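>>>> 
>>>> For reference, that trio is roughly the following (device and mount point
>>>> are made up; the session holding the read lock must stay open throughout):
>>>> 
>>>>     # Session 1: take the global read lock and keep the session open:
>>>>     #   mysql> FLUSH TABLES WITH READ LOCK;
>>>>     # Session 2: freeze the filesystem and take the snapshot:
>>>>     xfs_freeze -f /var/lib/mysql
>>>>     lvcreate --snapshot --name mysql-snap --size 10G /dev/vg0/mysql
>>>>     xfs_freeze -u /var/lib/mysql
>>>>     # Session 1 again:
>>>>     #   mysql> UNLOCK TABLES;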
>>>> 
>>>> On Thu, Nov 1, 2012 at 11:41 AM, Reindl Harald
>>>> <h.rei...@thelounge.net> wrote:
>>>> 
>>>>> Why do you need downtime?
>>>> 
>>>>   because mysqld has many buffers in memory and there
>>>>   is no atomic "flush buffers in daemon and freeze backend FS"
>>>> 
>>>>   a short while ago there was a guy on this list who had to learn
>>>>   this the hard way with a corrupt slave taken from a snapshot
>>>> 
>>>>   that's why i would ALWAYS do master/slave, which means ONE time
>>>>   down (rsync; stop master; rsync; start master) for a small
>>>>   time window, and after that you can stop the slave, take a
>>>>   100% consistent backup of its whole datadir and start the
>>>>   slave again, which will replay all transactions from the
>>>>   binary log that happened in the meantime
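>>>> 
>>>>   as a sketch (hostnames and paths made up; the second rsync only
>>>>   transfers what changed since the first, so the window stays small):
>>>> 
>>>>     # pass 1, master still running (moves the bulk of the data):
>>>>     rsync -a /var/lib/mysql/ slave:/var/lib/mysql/
>>>>     # short window: clean stop, delta pass, start again:
>>>>     mysqladmin shutdown
>>>>     rsync -a --delete /var/lib/mysql/ slave:/var/lib/mysql/
>>>>     /etc/init.d/mysqld start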
>>> 
>>> 
>> 
>> 
> 
> 
> -- 
> Manuel Aróstegui
> Systems Team
> tuenti.com


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql
