Madison Kelly wrote:
Hi all,
I've got a fairly large set of databases I'm backing up each Friday. The
dump takes about 12.5h to finish, generating a ~172 GB file. When I try
to load it though, *after* manually dumping the old databases, it takes
1.5 to 2 days to load the same databases. I am guessing this is, at least
in part, due to indexing.
My question is: given an empty target DB and a dump file generated via:
ssh r...@server "mysqldump --all-databases -psecret" > /path/to/backup.sql
How can I go about efficiently loading it into a new database?
Specifically, can I disable triggers, indexes, and the like until after the
load finishes? I can only imagine that a single "ok, go create your
indexes now" at the end would be faster. Perhaps some way to hold off
commits from happening as often? The target server has 32 GB of RAM, so I
suspect I should be able to hold things in memory and commit to disk
relatively rarely.
I am currently loading via this command:
mysql -psecret < /path/to/backup.sql
For that kind of dump, that kind of restore is what you get: your
current dump generates gigabytes of INSERT statements that must be
parsed and then processed one at a time.
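Even with the current INSERT-style dump, the load can often be sped up by
deferring per-row bookkeeping for the session. A sketch, assuming InnoDB
tables; the password and path are carried over from the original commands:

```shell
# Prepend SET statements to the dump before piping it into mysql.
# foreign_key_checks/unique_checks skip per-row validation during the load;
# autocommit=0 batches the work into one transaction instead of one per
# statement, at the cost of needing enough InnoDB log/undo space for it.
( echo "SET SESSION foreign_key_checks=0;"
  echo "SET SESSION unique_checks=0;"
  echo "SET autocommit=0;"
  cat /path/to/backup.sql
  echo "COMMIT;"
) | mysql -psecret
```

If a single huge transaction is a concern, drop the autocommit/COMMIT lines
and keep only the two SET statements.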
To get a faster restore, use a different sort of dump. I suggest you
compare your current process to one that uses the --tab option of
mysqldump (to save the data) and then LOAD DATA INFILE ... to
repopulate your server. This is the fastest known way to populate a
table, short of a direct file copy.
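A sketch of that dump/restore cycle. The database name db1 and the dump
directory are placeholders; note that --tab makes the mysqld process on the
source host write the files itself, so it needs the FILE privilege and a
server-writable directory, and the files must then be copied to the target
host before loading:

```shell
# On the source host: dump one database as per-table files.
# --tab produces table.sql (CREATE TABLE) and table.txt (tab-delimited
# data) for each table in the given directory.
mysqldump -psecret --tab=/path/to/dumpdir db1

# Copy /path/to/dumpdir to the target host, then on the target:
mysql -psecret -e "CREATE DATABASE IF NOT EXISTS db1"
for f in /path/to/dumpdir/*.sql; do
  mysql -psecret db1 < "$f"        # recreate each table's schema
done
# mysqlimport runs LOAD DATA INFILE for each .txt file, inferring the
# table name from the file's basename.
mysqlimport -psecret db1 /path/to/dumpdir/*.txt
```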
--
Shawn Green, MySQL Senior Support Engineer
Sun Microsystems, Inc.
Office: Blountville, TN
--
MySQL General Mailing List