Hi,
I am trying to benchmark MySQL's performance at fetching a random record.
To do so, I have set up the following configuration: Linux, fourteen 2G
tables, each populated with 65000 records of a fixed 32k size, type
mediumblob. My test randomly generates a table/id pair and selects that
record. I compare this to a different script that randomly generates a
table/offset pair, seeks to the offset, and reads 32k worth of data directly
from the table file. I have run my tests with a warm MySQL key cache
(mysqladmin ext shows all key blocks in the cache). Between tests, I read 1G
from an unrelated file to /dev/null in order to clear the system's buffer
cache.
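For reference, the two scripts look roughly like this. This is only a sketch,
not my exact code: the database name, table/column names, and datadir path
are placeholders, I am assuming MyISAM tables (whose data lives in .MYD
files), and I happen to use the MySQLdb driver here.

import os
import random
import MySQLdb  # any MySQL client library would do

NUM_TABLES = 14
NUM_RECORDS = 65000
RECORD_SIZE = 32 * 1024

# Test 1: fetch a random record through mysqld.
def mysql_test(cur):
    table = random.randint(1, NUM_TABLES)
    rec_id = random.randint(1, NUM_RECORDS)
    cur.execute("SELECT data FROM t%d WHERE id = %%s" % table, (rec_id,))
    return cur.fetchone()

# Test 2: seek to a random 32k-aligned offset in the table's data file and
# read 32k directly, bypassing mysqld entirely.  The raw test does not need
# to hit real record boundaries, just a random position in the file.
def raw_read_test():
    table = random.randint(1, NUM_TABLES)
    offset = random.randint(0, NUM_RECORDS - 1) * RECORD_SIZE
    fd = os.open("/var/lib/mysql/blobtest/t%d.MYD" % table, os.O_RDONLY)
    os.lseek(fd, offset, os.SEEK_SET)
    buf = os.read(fd, RECORD_SIZE)
    os.close(fd)
    return buf

conn = MySQLdb.connect(db="blobtest")
cur = conn.cursor()
for _ in range(1000):
    mysql_test(cur)  # or raw_read_test(), depending on the run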
When I compare the number of blocks read from disk (via vmstat/iostat), that
number is about 1.5 to 2 times higher for the mysqld test than for the
random-read test. Does anyone have an idea why this is the case? Why would
mysqld read in almost double the amount of data?
When I compare the number of completed 32k read requests, the random-read
number is about 1.5 to 2 times the MySQL number. So it seems that mysqld
reads in a lot more blocks, but not all of them are data the client actually
requested.
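For completeness, the same counters iostat reports can be sampled straight
from /proc/diskstats around each run. A minimal sketch, assuming a
reasonably recent Linux kernel; "sda" and run_test() below are placeholders:

def read_counters(device):
    for line in open("/proc/diskstats"):
        parts = line.split()
        if parts[2] == device:
            # field 4 = reads completed, field 6 = sectors read (512 bytes each)
            return int(parts[3]), int(parts[5])
    raise ValueError("device %s not found" % device)

reads_before, sectors_before = read_counters("sda")
run_test()  # one of the two tests sketched above
reads_after, sectors_after = read_counters("sda")
print("read requests: %d, sectors read: %d"
      % (reads_after - reads_before, sectors_after - sectors_before))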
Any ideas?
Thanks in advance.
--bijan