>-----Original Message-----
>From: Claudio Nanni [mailto:claudio.na...@gmail.com]
>Sent: Wednesday, April 21, 2010 2:12 AM
>Cc: mysql@lists.mysql.com
>Subject: Re: better way to backup 50 Gig db?
>

[JS] <snip>

[JS] Unless I've forgotten something from earlier in my career (what day is 
it, anyway?), there are three aspects to this problem:

1. Ensuring that your databases, slave and master individually, are internally 
consistent;
2. Ensuring that your master has captured the latest externally-supplied data; 
and
3. Ensuring that your slave and your master are fully in sync.

#1 is the proper goal for the master. That's the whole point of ACID. For the 
master database, #2 is unattainable. No matter how many times, in how many 
ways, or in how many places you buffer, there is always going to be the 
**possibility** that some incoming data will be lost. Even if you push the 
problem all the way back to a human user, it will still be possible to lose 
data. If something is possible, it will happen: perhaps not for millennia, but 
more likely as soon as you leave on vacation.

Similarly, #1 is an attainable and necessary goal for a slave, and #2 is just 
as unattainable for a slave as for a master. The only way to guarantee #3 is 
to make the replication part of the ACID transaction itself. The penalty for 
that is a loss of throughput, possibly a horrendous one. That is where 
somebody needs to do a cost/benefit analysis.
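
For what it's worth, the semi-synchronous replication plugin (new in MySQL 
5.5) gets you partway toward #3: the master's commit does not return until at 
least one slave has received -- though not necessarily applied -- the event. 
A minimal sketch, assuming the plugin libraries ship with your build (the 
one-second timeout is illustrative):

  -- On the master:
  mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
  mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;
  mysql> SET GLOBAL rpl_semi_sync_master_timeout = 1000;  -- ms, then
                                                          -- falls back to async
  -- On each slave:
  mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
  mysql> SET GLOBAL rpl_semi_sync_slave_enabled = 1;
  mysql> STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;  -- reconnect so the
                                                       -- setting takes effect

Note that this only guarantees the event reached a slave's relay log, so it 
is still weaker than folding replication into the transaction, and the added 
commit latency is exactly what that cost/benefit analysis has to weigh.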

>Just my two cents
>
[JS] ... and mine ...

>Claudio
>
>
>Gavin Towey wrote:
>
>You can make binary backups from the master using filesystem snapshots.  You
>only need to hold a global read lock for a split second.
>
>Regards,
>Gavin Towey
>
>
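
[JS] Agreed, that is the cheapest way to get a consistent binary copy off the 
master. A minimal sketch, assuming the data directory lives on an LVM volume 
(the volume and host names here are made up):

  mysql> FLUSH TABLES WITH READ LOCK;   -- keep this session open
  mysql> SHOW MASTER STATUS;            -- note the binlog file/position
  -- from a second shell, while the lock is held:
  shell> lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql
  mysql> UNLOCK TABLES;
  -- then copy from the snapshot at leisure:
  shell> mount /dev/vg0/mysql-snap /mnt/snap
  shell> rsync -a /mnt/snap/ backuphost:/backups/mysql/
  shell> umount /mnt/snap && lvremove -f /dev/vg0/mysql-snap

The lock is held only for the lvcreate, hence the split second. On restore, 
InnoDB performs its normal crash recovery against the copy, which is what 
keeps #1 intact; the recorded binlog coordinates let you seed a new slave 
from the same snapshot.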





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql?unsub=arch...@jab.org
