I would echo what Dan says. In addition, on the slave server you might look at running the new mysql-parallel-dump tool that Baron Schwartz has developed. It essentially runs a dump thread (by default) for each CPU core you have, so a dual-core box will run two threads and dump roughly twice as fast as a normal mysqldump. It also compresses the output, making it much more compact. He has since renamed the toolkit to Maatkit, and it is available at http://maatkit.sourceforge.net/.
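A minimal invocation might look something like the sketch below. The user name is hypothetical, and option names have varied across Maatkit releases, so check the tool's own --help output before relying on this:

```shell
# Hypothetical sketch: dump all databases with one worker thread per CPU
# core (the tool's default), writing compressed dump files to disk.
# The "backup" user is an assumption; substitute your own credentials.
mk-parallel-dump --user=backup --ask-pass
```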

Also, you might look into using an LVM snapshot to run the copy from. That way it doesn't interfere with your operations as much. I do that for some of our production slave servers myself.
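The LVM approach can be sketched roughly as follows. The volume group, logical volume, snapshot size, and mount points are all hypothetical; adjust them for your setup. The key point is that the tables stay locked only for the instant it takes to create the snapshot, not for the whole copy:

```shell
# Sketch of an LVM-snapshot backup of a MyISAM data directory that lives
# on a logical volume (here assumed to be /dev/vg0/mysql).

# Hold FLUSH TABLES WITH READ LOCK while the snapshot is created; the
# mysql client's "system" command runs lvcreate from inside the locked
# session, so the lock is only held for a moment.
mysql <<'EOF'
FLUSH TABLES WITH READ LOCK;
system lvcreate --size 1G --snapshot --name mysql-snap /dev/vg0/mysql
UNLOCK TABLES;
EOF

# Mount the snapshot and copy the files at leisure, then clean up.
mount /dev/vg0/mysql-snap /mnt/snap
rsync -a /mnt/snap/ /backup/mysql/
umount /mnt/snap
lvremove -f /dev/vg0/mysql-snap
```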

Keith

Dan Buettner wrote:
I'd strongly recommend setting up replication, and then taking your backups
from the replica.
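One common way to do this, sketched here with hypothetical paths and an assumed "backup" cron job on the replica, is to pause replication, dump, and resume; the master is never touched:

```shell
# Hypothetical sketch: back up from the replica so the master stays online.
# Replication is paused so the dump sees a consistent, non-moving data set.
mysql -e "STOP SLAVE;"
mysqldump --all-databases --opt | gzip > /backup/all-$(date +%F).sql.gz
mysql -e "START SLAVE;"
```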

mysqlhotcopy works great; I used it for years myself. But it does require
"freezing" your database while the copy happens, and no matter how you do
it, copying 20 GB takes a little bit of time.

Dan

On Nov 27, 2007 4:35 PM, David Campbell <[EMAIL PROTECTED]> wrote:

Andras Kende wrote:
Hi,

What is the preferred way to back up a 20GB database daily,
without taking it offline?

MySQL 4.1 MyISAM - (will be updated to MySQL 5)

133 table(s)  Sum 115,416,561  latin1_swedish_ci  20.1 GB

Mysqlhotcopy
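For MyISAM tables a mysqlhotcopy run is a one-liner; the database name, credentials, and target directory below are placeholders. Note that the tables are locked and flushed for the duration of the copy:

```shell
# Hypothetical sketch: copy one database's MyISAM table files into a
# backup directory, holding a read lock while the files are copied.
mysqlhotcopy --user=backup --password=secret mydb /backup/hotcopy
```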

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]

--
Keith Murphy
editor: MySQL Magazine http://www.mysqlzine.net
