We have a large database consisting of a single table with 400 fields. I know the first thing you are thinking is "normalize," but that is not an option right now, and the database gets reloaded every 3 months. The table contains marketing data, so it is not as easy to normalize as, say, an employee or helpdesk database.
We want to focus first on improving indexing performance on this large table. The database, once fully populated, will have 130 million rows. We ran a test with only 13.5 million rows and got the following results:
Index     Time (mins)
Index1    78
Index2    77
Index3    78
Index4    106
Index5    135
Index6    165
Index7    192
You can see that the first 3 indexes were created in roughly equal amounts of time, but after that, each index added about 30 minutes over the previous one (almost to the minute).
I know cardinality affects index build time, but what are the odds that our columns' cardinality alone would produce such a regular pattern? (Plot the points and see.)
You can see from the times above that this is unacceptable. Creating the same indexes on MS SQL Server takes much less time, and each index (aside from the initial clustered index) takes about the same amount of time.
Thanks.
Chris.
Chris,
Try creating all of the indexes with a single ALTER TABLE statement. It should be considerably faster. :)
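For example, instead of issuing seven separate ADD INDEX statements, batch them into one ALTER TABLE so the table's data is scanned and its index structures are rebuilt once rather than once per index. (Table and column names below are placeholders, not Chris's actual schema.)

```sql
-- Placeholder names: substitute your real table and columns.
-- One ALTER TABLE with multiple ADD INDEX clauses lets MySQL
-- rebuild the index structures in a single pass, instead of
-- repeating the work for each separately created index.
ALTER TABLE marketing_data
    ADD INDEX idx_col1 (col1),
    ADD INDEX idx_col2 (col2),
    ADD INDEX idx_col3 (col3),
    ADD INDEX idx_col4 (col4);
```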
Mike
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]