Russell Uman wrote:
Baron Schwartz wrote:
I don't think it will be any better to count distinct values. I think
the query is just slow because the index lookups are slow. Is the
'word' column really 150 bytes?
Huh. It's a varchar(50) on table1 and a varchar(50) on table2. I wonder
why EXPLAIN is reporting 150 as key_len?
utf8?
That's probably the culprit. How slow is this, by the way?
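For reference, MySQL's utf8 charset reserves up to three bytes per character, so a VARCHAR(50) utf8 key shows up as 50 x 3 = 150 bytes in key_len. You can confirm the per-character width from information_schema (the charset names here are the standard built-ins):

```sql
-- maxlen is the maximum bytes per character for each charset;
-- utf8 reports 3 and latin1 reports 1, which explains key_len = 50 * 3 = 150.
SELECT character_set_name, maxlen
FROM information_schema.character_sets
WHERE character_set_name IN ('utf8', 'latin1');
```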
This is also interesting. As you can see in the slow query log reported
before, it took 94 seconds. I'd say I normally see between 15 and 90
seconds in the slow query log for this query.
However, I just ran the query now, at a time when the application is not
heavily loaded, and it finished quickly - in less than a second.
Another run a few minutes later took around 3 seconds, so there seems to
be some interaction with load.
370k rows in one table, verifying the non-existence of index records
in a 4M-row table with 150-byte index values... what does "slow" mean
for your application? How big is the index for the 4M-row table (use
SHOW TABLE STATUS)?
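Index_length in SHOW TABLE STATUS is reported in bytes. A minimal sketch, assuming a hypothetical table name:

```sql
-- \G formats the output vertically; look at the Index_length field.
SHOW TABLE STATUS LIKE 'table2'\G
```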
The larger table has a 95M index; the smaller has a 5M index. key_buffer
is set to 2G, and when I look at top, mysql never actually gets above
1.5G, so I'm under the impression that all the indexes are in memory.
It's a search table, so it does get a lot of inserts, but the slow log
never reports any lock time.
Is there anything else I can investigate?
Do you need utf8? :-)
Check your cache hits. I can't remember if you said, but is it an
InnoDB table? I'm guessing MyISAM since you have a 2G key buffer.
Check key_read_requests and key_reads for the query
(mysql-query-profiler is a handy way to do this).
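If it is MyISAM, one quick way to see whether a particular run hits disk is to snapshot the key cache counters around the query (these are global counters, so an otherwise quiet server gives the cleanest numbers):

```sql
-- Key_reads counts index reads that missed the key cache and went to
-- disk; Key_read_requests counts all index block reads. The delta
-- across the query run approximates the disk reads it caused.
SHOW GLOBAL STATUS LIKE 'Key_read%';
-- ... run the slow query here ...
SHOW GLOBAL STATUS LIKE 'Key_read%';
-- Key_blocks_used * key_cache_block_size (default 1024 bytes) shows
-- roughly how much of the 2G key_buffer is actually occupied.
SHOW GLOBAL STATUS LIKE 'Key_blocks_used';
```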
Baron
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]