Hi all,

I've got a fairly large set of databases that I back up each Friday. The dump takes about 12.5 hours to finish and generates a ~172 GB file. When I try to load it, though, *after* manually dropping the old databases, it takes 1.5 to 2 days to load the same databases. I'm guessing this is, at least in part, due to indexing.

My question is: given an empty target DB and a dump file generated via:

ssh r...@server "mysqldump --all-databases -psecret" > /path/to/backup.sql
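
(As an aside, I assume I could pipe the dump straight into the new server and skip the 172 GB intermediate file, along the lines of the command below, but that presumably wouldn't help with the load speed itself, so the question still stands:

ssh r...@server "mysqldump --all-databases -psecret" | mysql -psecret

)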

How can I go about efficiently loading it into a new database? Specifically, can I disable triggers, indexes and whatnot until after the load finishes? I can only imagine that a single "ok, go create your indexes now" step at the end would be faster. Is there also some way to hold off commits from happening as often? The target server has 32 GB of RAM, so I suspect I should be able to hold things in memory and commit to disk relatively rarely. Something like the sketch below is what I have in mind.
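
I'm only guessing at the relevant knobs here, so treat this as a rough sketch rather than something I know to be correct, but I was imagining wrapping the import in session settings along these lines:

SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
SOURCE /path/to/backup.sql;
COMMIT;

Is that the right general idea, or is there a better-supported way to get the effect I'm after?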

I am currently loading via this command:

mysql -psecret < /path/to/backup.sql
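
If prepending those SET statements to the stream is the sane way to do it, I figure I could wrap the same load command like this (again, just my guess at how to apply the settings):

( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"; cat /path/to/backup.sql; echo "COMMIT;" ) | mysql -psecret

but I'd welcome corrections if there's a cleaner approach.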

The source and destination MySQL versions are:

Source:
mysql Ver 14.13 Distrib 5.1.19-beta, for unknown-linux-gnu (x86_64) using readline 5.0

Dest:
mysql Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (x86_64) using readline 5.1

The reason for the discrepancy is that the old server was set up from source on CentOS 4.x by a previous tech, while the destination server runs the stock version from CentOS 5.x. The source server will be phased out soon, so no real attempt was made to keep the versions matched.

Thanks!

Madi

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql?unsub=arch...@jab.org
