I routinely see people asking how to handle *large* tables in MySQL efficiently. Hopefully, the following will be helpful to those individuals.
I have several tables that are 500M to 4B rows in size and require me to drop and re-create indices at regular intervals. Usually, I'm removing about 10% of the existing records and then adding about 15% new records back in. Doing this with indices turned on would take days if not weeks. With the proper sequence of operations, however, I can accomplish this in about 24 hours.
I'm assuming you're using MyISAM tables and not InnoDB.
Have you tried using "Alter Table ... Disable Keys" to disable the non-unique indexes before you do your deletes and inserts?
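As a sketch of what that cycle looks like (the table name `big_table` and the delete predicate are hypothetical, not from the original posts):

```sql
-- Suspend maintenance of the non-unique indexes on a MyISAM table.
-- Unique and primary-key indexes are still updated.
ALTER TABLE big_table DISABLE KEYS;

-- Do the bulk work while index updates are suspended.
DELETE FROM big_table WHERE expired = 1;   -- hypothetical predicate
-- ... bulk INSERTs or LOAD DATA here ...

-- Rebuild the disabled indexes in a single pass, which is much
-- faster than maintaining them row by row during the inserts.
ALTER TABLE big_table ENABLE KEYS;
```

The win comes from the final bulk rebuild: sorting each index once at the end beats updating every index for every one of millions of row changes.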
When possible, we load data using mysqlimport. This is much faster than connecting to the server and executing a series of INSERT statements.
You can get similar speeds using "LOAD DATA INFILE".
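That makes sense, since mysqlimport is essentially a command-line wrapper around this statement. A minimal sketch, with a hypothetical file path, table name, and delimiter:

```sql
-- Server-side bulk load; one statement replaces thousands of
-- individual INSERTs and their per-statement round trips.
-- Path, table name, and field terminator are assumptions here.
LOAD DATA LOCAL INFILE '/tmp/big_table.txt'
INTO TABLE big_table
FIELDS TERMINATED BY '\t';
```

With mysqlimport, the target table is inferred from the file name (a file named `big_table.txt` loads into table `big_table`).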
I have to ask, why are you deleting 10% of your rows and adding another 15% back in? Are you removing old data, data that is say 6 months old, and then importing this months data? If so, can't you just store one months data in separate tables and use Merge Table for your queries? At the end of each month, just drop the oldest table and create a new table for the current month's data.
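A sketch of that monthly-rotation scheme, with hypothetical table names (the MERGE engine presents several identical MyISAM tables as one table for queries):

```sql
-- One underlying MyISAM table per month (names are assumptions).
CREATE TABLE log_2004_05 (id INT NOT NULL, msg VARCHAR(255)) ENGINE=MyISAM;
CREATE TABLE log_2004_06 LIKE log_2004_05;

-- The MERGE table unions the monthly tables; INSERT_METHOD=LAST
-- routes new rows into the most recent one.
CREATE TABLE log_all (id INT NOT NULL, msg VARCHAR(255))
  ENGINE=MERGE UNION=(log_2004_05, log_2004_06) INSERT_METHOD=LAST;

-- At month end: add the new month, drop the oldest, repoint the MERGE.
CREATE TABLE log_2004_07 LIKE log_2004_06;
DROP TABLE log_2004_05;
ALTER TABLE log_all UNION=(log_2004_06, log_2004_07);
```

Dropping a whole table is effectively instant, so this sidesteps the expensive DELETE-plus-index-rebuild cycle entirely.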
Mike
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]