I have a server with several hundred tables across a few different databases,
comprising almost a gig of data, all running on a rather old (3.23) version
of MySQL. I have used the slow query log to identify queries and have
optimized them significantly. At this point the entries in the
slow query log (with long_query_time set to around 3 seconds) usually
examine between 1k and 10k rows. When I run those queries by hand the query
time is generally under 0.1 seconds.

The server is basically running with a my-small.cnf, and I think that most of
the remaining performance I can pull out of the server will come from tuning
the MySQL server variables for the table cache and temporary table size. I am
concerned that I might run into issues with RAM usage. With this in mind:
1. Is there some way to use the general query log to test different server
configurations against a real-world assortment of queries? Perhaps some way
to use mysqlslap? Any other suggestions for benchmarking tools?
2. I see entries in the slow query log where the number of rows examined does
not correlate with the EXPLAINs I run for the same queries on the production
server. Is this likely a situation where MySQL needs index hints, or could
something else be in play?
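For concreteness, the kind of my.cnf changes I have in mind (variable names
from the 3.23-era manual; the values below are illustrative guesses for a
server with a ~1 GB dataset, not settings I have tested):

```ini
[mysqld]
# Illustrative values only -- size these against available RAM.
key_buffer      = 64M   # cache for index blocks
table_cache     = 512   # several hundred tables: let most stay open
tmp_table_size  = 32M   # in-memory temp tables before spilling to disk
sort_buffer     = 2M    # allocated per connection, so keep it modest
```

The per-connection buffers are what worry me for RAM: they effectively
multiply by max_connections under load.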
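For question 1, my current thought is to pull the SQL out of the general
query log and replay it, either through mysqlslap's --query option (run from
a newer client box) or just piped through the mysql client under time. A
rough Python sketch of the extraction step, assuming the 3.23/4.x general
log layout of timestamp, connection id, command type, argument (continuation
lines of a multi-line statement carry no prefix):

```python
import re

# One general-log entry line: optional "yymmdd hh:mm:ss" timestamp,
# a connection id, a command word, then the argument. Continuation
# lines of a multi-line statement carry no such prefix. (Layout
# assumed from 3.23/4.x-era logs; adjust the regex if yours differs.)
ENTRY = re.compile(
    r"^(?:\d{6}\s+\d{1,2}:\d{2}:\d{2}\s+)?"  # optional timestamp
    r"\s*(\d+)\s+"                           # connection id
    r"(\w+)\s*"                              # command: Query, Connect, Quit...
    r"(.*)$"                                 # argument (the SQL, for Query)
)

def extract_queries(log_text):
    """Return the SQL of every Query entry, with continuation
    lines folded back onto their statement."""
    queries, current = [], None
    for line in log_text.splitlines():
        m = ENTRY.match(line)
        if m:
            if current is not None:      # previous statement is complete
                queries.append(current)
                current = None
            if m.group(2) == "Query":
                current = m.group(3)
        elif current is not None:        # continuation of a multi-line query
            current += " " + line.strip()
    if current is not None:
        queries.append(current)
    return queries
```

Writing the result one statement per line would give a file I could hand to
mysqlslap --query, or simply run as `time mysql dbname < queries.sql` against
a test box with each candidate configuration.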

Thank you in advance for any help. RTFM welcomed, just point out what page
;)

-- 
Rob Wultsch
