Is there any upcoming fix for this recurring problem? The table handler is giving poor statistics to the optimizer, and the optimizer is making bad decisions because of it. It appears to come and go, depending on the data in the table, what's been done to it, and so on.
To give one example: with one of our queries that joins roughly 10 tables, the optimized plan needs to sift through approximately 6500 rows. The unoptimized plan needs to sift through 8600 rows, a 32% increase that results (in our case) in a 20% increase in CPU usage on a dual-CPU system. Right now the tables are small, but we expect them to grow, and the unoptimized plan scales *much* worse than the optimized one. Furthermore, I thought putting more data into the tables might eliminate the problem as the two plans' data distributions grew further apart, but it looks like that isn't the case.

Sometimes, converting the tables to MyISAM (where the optimization *always* works) and then back to InnoDB fixes it, but obviously that's not something you want to do on a running system.

Is there any headway being made on this problem? I think I first reported it back around .41 or .42. This isn't really something a bug report can be filed on, because it seems to be the result of a particular data distribution and InnoDB's corresponding analysis of it, but if there's some bit of information that would help, short of the data in my database, I'll gladly pass it along.

Philip Molter
Texas.net Internet
http://www.texas.net/
[EMAIL PROTECTED]
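For anyone wanting to try the engine round-trip workaround mentioned above, a minimal sketch follows. The table name is hypothetical, and `TYPE =` is the storage-engine clause used by MySQL of that era (later versions renamed it `ENGINE =`):

```sql
-- Hypothetical table name. Converting to MyISAM and back forces a full
-- table rebuild, which recomputes InnoDB's index statistics as a side
-- effect. Both statements lock the table and copy every row, so this
-- is disruptive on a running system, as noted above.
ALTER TABLE orders TYPE = MyISAM;
ALTER TABLE orders TYPE = InnoDB;
```

Whether the rebuilt statistics stay good appears to depend on the data distribution, per the report above, so this is a stopgap rather than a fix.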