At 03:59 AM 12/17/2009, you wrote:
Madison Kelly wrote:
Hi all,
I've got a fairly large set of databases I'm backing up each Friday. The
dump takes about 12.5h to finish, generating a ~172 GB file. When I try
to load it though, *after* manually dropping the old databases, it takes
1.5~2 days to load the same databases. I am guessing this is, at least in
part, due to indexing.
My question is: given an empty target DB and a dump file generated via:
ssh r...@server "mysqldump --all-databases -psecret" > /path/to/backup.sql
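(An editorial aside, not from the thread: on the reload side, a common speedup
for InnoDB tables is to relax checks for the duration of the import. A sketch,
assuming you can open a client session and source the dump from it:)

```sql
-- Hypothetical wrapper around sourcing the dump; the settings are
-- session-scoped, so nothing needs to be reset for other connections.
SET autocommit = 0;           -- batch rows into fewer transactions
SET unique_checks = 0;        -- defer secondary unique-index checks
SET foreign_key_checks = 0;   -- skip FK validation during the load
SOURCE /path/to/backup.sql;
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;
```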
I use the "-e -v -f -q -Q -K" options with mysqldump on large
tables/databases. That does what you are asking for: it disables key
generation until all of the data is inserted, and it uses multi-row INSERT
statements rather than an individual INSERT statement for every row, which
speeds things up considerably.
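(For reference, the invocation Mike describes might look like the following,
with the short flags spelled out; host, password, and paths are the
placeholders from the original post:)

```shell
# Short flags and their long equivalents:
#   -e  --extended-insert  multi-row INSERT statements
#   -v  --verbose          progress messages on stderr
#   -f  --force            continue even if an SQL error occurs
#   -q  --quick            stream rows instead of buffering whole tables
#   -Q  --quote-names      quote identifiers with backticks
#   -K  --disable-keys     wrap each table's rows in DISABLE/ENABLE KEYS
#                          (effective for MyISAM tables)
ssh root@server "mysqldump --all-databases -e -v -f -q -Q -K -psecret" \
    > /path/to/backup.sql
```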
"Load Data ..." is still going to be much faster.
Mike
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org