In the last episode (Jul 27), Mike Spreitzer said:
> Sure, `wc` is different from mysql --- but different enough to account for
> a 16000:75 ratio?

Most definitely.  wc is reading sequentially, and the OS is probably
coalescing those reads and prefetching disk blocks in 128KB chunks.
16000 * 128KB is around 2GB/sec (iostat would tell you your actual
throughput).  You probably either have a 2Gb Fibre Channel card, or else wc
is CPU-bound at this point, counting each character as it streams past.  I
bet "dd if=largefile of=/dev/null bs=8k" would give you even more iops.
"dd ... bs=1m" would probably max out your Fibre Channel card's bandwidth.
None of those commands are doing random I/Os, though, so you can't compare
their numbers to your mysql query.
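The arithmetic and the dd tests above can be sketched like this (the 64MB
scratch file and /tmp path are assumptions of mine; point dd at your real
data file to exercise the actual storage):

```shell
# Sanity-check the math: 16000 prefetched 128KB reads per second.
echo "$((16000 * 128 / 1024)) MB/sec"     # prints "2000 MB/sec", ~2GB/sec

# Create a 64MB scratch file standing in for "largefile" (assumption).
dd if=/dev/zero of=/tmp/largefile bs=1M count=64 2>/dev/null

# Small sequential reads: lots of iops, coalesced by OS readahead.
dd if=/tmp/largefile of=/dev/null bs=8k

# Large sequential reads: should approach the card's raw bandwidth.
dd if=/tmp/largefile of=/dev/null bs=1M

rm -f /tmp/largefile
```

Watching iostat in another window while these run will show what the disks
are actually delivering, as opposed to what dd reports from cache.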
 
> Will iostat give a good utilization metric for GPFS?

For your particular query, yes.  You're doing single-threaded random I/O,
so you are fetching a random disk block, waiting for the result, then
fetching another random block, and so on.  Nearly 100% of your time should
be in iowait, waiting for a disk head to seek to your data.  If it's not at
least 80%, then your query isn't waiting on disk I/O, and since you aren't
CPU-bound, I'm not sure what your bottleneck would be at that point.
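You can read the iowait figure straight from "iostat -c 5" or the "wa"
column of "vmstat 5"; as a minimal sketch, assuming Linux and its
/proc/stat layout, you can also compute it directly:

```shell
#!/bin/sh
# %iowait over a short window, read from /proc/stat (Linux-only assumption).
# Fields on the "cpu" line: user nice system idle iowait irq softirq ...
cpu_sample() { awk '/^cpu /{print $6, $2+$3+$4+$5+$6+$7+$8}' /proc/stat; }

set -- $(cpu_sample); io1=$1; tot1=$2
sleep 5
set -- $(cpu_sample); io2=$1; tot2=$2

# If the query is seek-bound, this should print something north of 80%.
awk -v i="$((io2 - io1))" -v t="$((tot2 - tot1))" \
    'BEGIN { printf "iowait: %.1f%%\n", t ? 100 * i / t : 0 }'
```

Run it while your query is executing; an idle box will naturally report
close to zero.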
 
> If I want to try to actually hold a 2GB table in RAM, is there anything I 
> need to set in my.cnf to enable that?

Just make sure your key_buffer_size is large enough to hold the index.  You
can find this number by setting key_buffer_size to a huge value (32GB, for
example), running "LOAD INDEX INTO CACHE" for your index, then running
"SHOW STATUS LIKE 'key_blocks_used';".  Multiply key_blocks_used by your
key block size (key_cache_block_size, 1024 bytes by default) to get the
index's footprint in bytes.  Note that the key buffer caches only index
blocks; MyISAM data blocks are cached by the OS filesystem cache.

http://dev.mysql.com/doc/refman/5.1/en/load-index.html
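To make the setting permanent, a my.cnf sketch might look like this (the
2500M figure is a placeholder of mine, not a recommendation -- use the size
you measured above plus some headroom; InnoDB tables use
innodb_buffer_pool_size instead):

```ini
# my.cnf sketch -- sized to hold the whole MyISAM index in RAM.
[mysqld]
key_buffer_size = 2500M
```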

-- 
        Dan Nelson
        dnel...@allantgroup.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql?unsub=arch...@jab.org
