Aaron Blew wrote:
Here are a couple ideas:
* Decrease innodb_autoextend_increment to 8 or even 4.  You may see
additional IO wait because you're pre-allocating space in chunks
disproportionate to what you immediately need, causing bursty performance.
* If your remaining MyISAM tables don't need it, take 2GB of the key_buffer
allocation and put it towards the InnoDB buffer pool (a sketch of both
changes follows below).
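
Something like this in my.cnf would cover both; the sizes below are
placeholders (assuming the 10G pool from the my.cnf further down in this
thread), not tuned recommendations:

# hypothetical my.cnf fragment -- adjust the absolute sizes to your box
key_buffer_size             = 2G   # placeholder: your current value minus the 2GB
innodb_buffer_pool_size     = 12G  # the existing 10G pool plus the 2GB freed above
innodb_autoextend_increment = 4    # was 16 (units are MB); smaller, steadier extensions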

What are the system's specs?  What's its underlying storage?  What flags
were used when you created the filesystem(s)?  What OS/version of MySQL are
you running?  Could you send us some iostat output?

Thanks for all of your suggestions -- we've switched back to MyISAM until we can test this better.

* Increasing innodb_buffer_pool_size had no apparent effect on performance.
* System is a Dell PE2950, 4 cores, 32GB RAM, RAID-10 local disks.
* File system is plain ext3, created with 'mke2fs -j'.
* Running RHEL 4.4, MySQL 5.0.66a-enterprise (open ticket with MySQL, working all angles here).
* iostat output sample (iostat 5; extended iostat -x output appears further down):

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.11    0.00    3.37   23.44    0.00   68.08

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             294.00       328.00      2560.00        656       5120
dm-0            359.50       328.00      2560.00        656       5120
dm-1              0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          14.27    0.00    6.63   22.28    0.00   56.82

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             360.70        55.72      4815.92        112       9680
dm-0            456.22        55.72      4815.92        112       9680
dm-1              0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          34.08    0.00   23.60   15.86    0.00   26.47

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             348.00       316.00      3304.00        632       6608
dm-0            446.00       316.00      3304.00        632       6608
dm-1              0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          29.59    0.00   27.84   15.23    0.00   27.34

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             475.00       152.00      4284.00        304       8568
dm-0            554.50       152.00      4284.00        304       8568
dm-1              0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          23.28    0.00   15.77   18.15    0.00   42.80

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             312.50       192.00      3044.00        384       6088
dm-0            401.50       192.00      3044.00        384       6088
dm-1              0.00         0.00         0.00          0          0



Interesting note: when I switched to using the MyISAM version of the table with the old configuration, we still had very poor performance, with significant CPU IO wait, as you can see from the iostat output above -- and this was without any load on the InnoDB table at all. Once I restarted with the new settings, the load and performance recovered immediately. You can see in the iostat output below where the restart occurred:

Device:         rrqm/s  wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00   61.68  260.88  610.58  2158.88  5384.43     8.66    78.96   90.60   1.05  91.52
dm-0              0.00    0.00  261.08  673.05  2158.88  5384.43     8.08    85.99   92.06   0.98  91.54
dm-1              0.00    0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.50    0.00    2.05   19.45    0.00   75.00

Device:         rrqm/s  wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00   30.40  373.20  251.00  3340.80  2251.20     8.96    31.07   49.77   1.13  70.64
dm-0              0.00    0.00  373.00  281.40  3340.80  2251.20     8.55    33.85   51.72   1.08  70.72
dm-1              0.00    0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           4.15    0.00    2.00    2.40    0.00   91.45

Device:         rrqm/s  wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    8.40   20.60   54.20   206.40   500.80     9.45     5.30   70.80   1.37  10.28
dm-0              0.00    0.00   20.60   62.60   206.40   500.80     8.50     5.57   66.90   1.23  10.26
dm-1              0.00    0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.75    0.00    0.60    2.25    0.00   96.40

Device:         rrqm/s  wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    7.20   13.20   89.20   107.20   771.20     8.58     6.88   67.16   1.04  10.66
dm-0              0.00    0.00   13.20   96.40   107.20   771.20     8.01     7.15   65.24   0.97  10.68
dm-1              0.00    0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.65    0.00    0.70    2.15    0.00   96.50


The settings changed are:

$ diff -u my.cnf.pre  my.cnf.post
--- my.cnf.pre  2008-09-05 01:07:08.000000000 -0700
+++ my.cnf.post 2008-09-05 01:05:23.000000000 -0700
@@ -35,15 +35,17 @@

 log-bin                        = mysql-bin
 server-id              = 1
-sync_binlog            = 1

-innodb_buffer_pool_size        = 10G
-innodb_log_file_size   = 500M
-innodb_flush_log_at_trx_commit=0
-innodb_flush_method    = O_DIRECT
-skip-innodb-doublewrite
-innodb_support_xa      = 1
-innodb_autoextend_increment = 16
+# set to 1 after conversion - makes sure writes to binlog are synced to disk
+#sync_binlog           = 1
+
+innodb_buffer_pool_size        = 4G
+innodb_log_file_size   = 20M
+#innodb_flush_log_at_trx_commit=0
+#innodb_flush_method   = O_DIRECT
+#skip-innodb-doublewrite
+#innodb_support_xa     = 1
+innodb_autoextend_increment = 4
 innodb_data_file_path  = ibdata1:40G:autoextend

 [mysqldump]
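
One gotcha with the diff above: since innodb_log_file_size shrank from 500M
to 20M, the old redo logs no longer match the configured size and InnoDB will
refuse to start until they're moved aside. Roughly (paths assume the default
datadir; adjust for your layout):

# after a clean shutdown, move the old redo logs out of the way;
# InnoDB recreates them at the new size on the next startup
mysqladmin shutdown
mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/
service mysql start   # or however mysqld is started on your system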

So, we're going to take a break from the InnoDB stuff while I recover the slave, and try again in a week or so.

Thanks!
Josh Miller, RHCE
