Re: max_seeks_for_key in InnoDB
Thanks for your reply. I guess the root of my problem is that MySQL is making a poor choice of index, which I presume is based on the cardinality numbers of each index. When I run ANALYZE TABLE, these values can fluctuate wildly -- between 16 and 26,000, for example. According to the manual, ANALYZE TABLE estimates cardinality by doing 10 random dives into each of the index trees (this is up from 8 random dives in 4.1.11, which is good!). Maybe the accuracy of this measurement decreases as tables reach millions of rows?

I would really like to avoid rewriting all of my queries to add USE INDEX and STRAIGHT_JOIN, since some of them are quite complicated and I would prefer to leave the job to MySQL.

My questions are:

1) Can the 10 random dives be made configurable? I would like to run ANALYZE TABLE with 100 random dives if that would produce a more accurate estimate.
2) Is there some reason that my index trees would not be uniform? Is there anything I can do about this?
3) For InnoDB tables, does it make sense to always keep max_seeks_for_key at a low value (1, 100, ?)

Many thanks in advance,
-Bob

- Original Message -
From: [EMAIL PROTECTED]
To: Bob O'Neill [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Friday, June 03, 2005 3:20 PM
Subject: Re: max_seeks_for_key in InnoDB

Hi,

you can use a hint to force specific index usage:
http://lib.hutech.edu.vn/ebookonline/ebook1/0596003064/hpmysql-CHP-5-SECT-4.html

But this is not a good idea, since data changes and index selectivity can become bad. Also, if an index scan plus the row lookups costs more than a full table scan, you will prefer the full table scan. So, depending on selectivity, using an index can be a very bad idea. This depends on how many rows your query retrieves relative to the COUNT(*) of the table.

Mathias

Selon Bob O'Neill [EMAIL PROTECTED]:

I am having problems with MySQL inconsistently choosing the wrong index, or no index at all, for queries on tables with 20 million rows.
Would it be a good idea for me to set max_seeks_for_key to 1 (or something less than 4 billion) in order to force MySQL to use an index? We are using InnoDB. Since InnoDB has clustered indexes, is there ever a good reason for MySQL to prefer a table scan?

Thanks,
-Bob

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
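For reference, the per-query hint Mathias describes and the cardinality numbers Bob mentions can both be exercised from the client. This is only a sketch: the table and index names (`orders`, `idx_customer`) and the WHERE value are hypothetical, not from the thread.

```sql
-- Hypothetical table/index names: orders, idx_customer.
-- Refresh the optimizer's cardinality estimates (the random index dives):
ANALYZE TABLE orders;

-- Inspect the resulting per-index Cardinality column:
SHOW INDEX FROM orders;

-- Override the optimizer's index choice for one query, as Mathias suggests:
SELECT COUNT(*)
FROM orders FORCE INDEX (idx_customer)
WHERE customer_id = 42;
```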
max_seeks_for_key in InnoDB
I am having problems with MySQL inconsistently choosing the wrong index, or no index at all, for queries on tables with 20 million rows. Would it be a good idea for me to set max_seeks_for_key to 1 (or something less than 4 billion) in order to force MySQL to use an index? We are using InnoDB. Since InnoDB has clustered indexes, is there ever a good reason for MySQL to prefer a table scan?

Thanks,
-Bob
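As a sketch of the idea in the question: max_seeks_for_key caps the number of seeks the optimizer will assume any key lookup costs, so a low value biases it toward index access. The value 100 below is purely illustrative, not a recommendation from the thread.

```sql
-- Cap the optimizer's assumed seeks per key at 100 (illustrative value),
-- which makes index access look cheap relative to a full table scan:
SET SESSION max_seeks_for_key = 100;
SHOW VARIABLES LIKE 'max_seeks_for_key';
```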
lock tables and sql cache
If I try to read table 'b' after locking table 'a', I expect to get the error message "Table 'b' was not locked with LOCK TABLES". However, if my query that accesses table b is stored in the query cache, I don't get the error. This causes a problem in the following scenario:

User 1:
  LOCK TABLES a READ;
  SELECT SQL_CACHE COUNT(*) FROM b;  (assume it was already cached)

User 2:
  INSERT b VALUES('value');
  SELECT SQL_CACHE COUNT(*) FROM b;  (the SELECT puts the query back into the cache)

User 1:
  SELECT SQL_CACHE COUNT(*) FROM b;  (now he gets a different result)
  UNLOCK TABLES;

User 1 thinks that everything he's doing is safe inside of an emulated transaction. But the data in table b has changed between the LOCK and the UNLOCK, and User 1 isn't notified that he is doing anything wrong. I think an appropriate fix would be to force User 1 to lock table b even though the results of that query are stored in the query cache. Is this possible?

Thanks,
-Bob
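A sketch of the workaround implied by the proposed fix, using the table names from the scenario above: if the session locks every table it will read, other sessions' writes to b are blocked for the duration, so a stale cached result can no longer slip past a partial lock.

```sql
-- Lock both tables up front; a READ lock on b blocks other sessions'
-- INSERTs, so reads of b stay consistent until UNLOCK, cached or not.
LOCK TABLES a READ, b READ;
SELECT SQL_CACHE COUNT(*) FROM b;
UNLOCK TABLES;
```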
Slow queries only the first time
Hello. I am wondering why some of my queries are slow on the first run, but speedy on subsequent runs. They are not being query cached, as I have query_cache_type set to DEMAND. Is it something as simple as pulling the data into RAM from disk, or is there something else going on? Here's a simple example:

mysql> select count(*) from foo;
+----------+
| count(*) |
+----------+
|  1374817 |
+----------+
1 row in set (3.60 sec)

mysql> select count(*) from foo;
+----------+
| count(*) |
+----------+
|  1374817 |
+----------+
1 row in set (0.92 sec)

mysql> show variables like 'query_cache_type';
+------------------+--------+
| Variable_name    | Value  |
+------------------+--------+
| query_cache_type | DEMAND |
+------------------+--------+
1 row in set (0.00 sec)

I am running MySQL 4.1.10 with InnoDB on RHEL 2.1 (kernel 2.4.9-e.49enterprise). Binaries are Linux x86 glibc static gcc RPMs from mysql.com.

Thanks,
-Bob
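One way to test the "pulling data into RAM" theory is to watch the InnoDB buffer pool counters around the two runs. Assumption: the server exposes the Innodb_buffer_pool_read* status variables, which newer MySQL versions do; an older 4.1 server may not have them.

```sql
-- Assumption: Innodb_buffer_pool_read* status counters are available.
-- Innodb_buffer_pool_reads counts reads that had to go to disk;
-- Innodb_buffer_pool_read_requests counts logical (in-memory) reads.
SHOW STATUS LIKE 'Innodb_buffer_pool_read%';
SELECT COUNT(*) FROM foo;
SHOW STATUS LIKE 'Innodb_buffer_pool_read%';
-- If Innodb_buffer_pool_reads jumps on the first run but not on repeats,
-- the speedup on later runs is the warmed buffer pool, not the query cache.
```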