Hey Josh, I came in really late on this discussion. It's been my
experience that InnoDB is great until the size of the database/indexes
surpasses the amount of memory you can give InnoDB for caching.
The performance drop-off is quick and dramatic. I've seen this
happen on live tables that performed well one day and horribly the
next, although that was on a table of about 20 million rows, not 130M.
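A quick way to see how the data-plus-index footprint compares to your
caches is the information_schema query below, which works on MySQL 5.0
and up (InnoDB sizes there are estimates, but close enough for this
kind of capacity check):

```sql
-- Total data + index footprint per storage engine, in GB.
-- InnoDB figures are approximate, but fine for capacity planning.
SELECT engine,
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 1)
         AS total_gb
FROM information_schema.TABLES
WHERE table_schema <> 'information_schema'
GROUP BY engine;
```

If the InnoDB total comes back larger than innodb_buffer_pool_size,
you're in exactly the territory I'm describing.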
Based on your table size, you would need to be running a 64-bit system
and 64-bit MySQL so you could allocate enough memory to InnoDB. You
don't see the system swapping because InnoDB is working within its
defined memory allocation limits. Using EXPLAIN on your queries
probably isn't showing you anything helpful because MySQL is using the
proper indexes; InnoDB just can't fit the entire index in memory. My
best guess is that InnoDB is loading part of the index, searching,
loading the next part, searching, and so on, which is why you don't see
consistently high IO or CPU. If you run vmstat 1, it may show you
IO occurring, followed by CPU, then back to IO.
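You can also confirm the cache misses from inside MySQL rather than
from vmstat. As a rough sketch: Innodb_buffer_pool_read_requests counts
logical reads, while Innodb_buffer_pool_reads counts reads that missed
the buffer pool and had to go to disk.

```sql
-- Logical reads vs. reads that missed the buffer pool and hit disk.
-- A high (and climbing) Innodb_buffer_pool_reads relative to
-- Innodb_buffer_pool_read_requests means the working set does not
-- fit in the buffer pool.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
```

Run it twice a minute apart and compare the deltas rather than the
absolute numbers, since the counters accumulate from server start.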
For very large tables I stick with MyISAM and use MERGE tables where
applicable.
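For reference, a MERGE setup looks roughly like this. The table names
are made up for illustration (patterned on the MySQL manual's example),
and the underlying MyISAM tables must have identical column and index
definitions:

```sql
-- Two identical MyISAM tables holding different slices of the data.
CREATE TABLE log_2007 (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  message CHAR(20)
) ENGINE=MyISAM;
CREATE TABLE log_2008 LIKE log_2007;

-- One MERGE table presenting both as a single logical table.
-- INSERT_METHOD=LAST routes new rows to the last table in the UNION.
CREATE TABLE log_all (
  id INT NOT NULL AUTO_INCREMENT,
  message CHAR(20),
  INDEX (id)
) ENGINE=MERGE UNION=(log_2007, log_2008) INSERT_METHOD=LAST;
```

Note the MERGE table declares a plain INDEX rather than a PRIMARY KEY:
a MERGE table can't enforce uniqueness across its underlying tables.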
Hope that helps or points you in the right direction.
Brent Baisley
On Sep 4, 2008, at 4:26 PM, Josh Miller wrote:
Good afternoon,
I have recently converted a large table from MyISAM to InnoDB and am
experiencing severe performance issues because of it. HTTP response
times have gone from avg .25 seconds to avg 2-3 seconds. Details
follow:
PHP/MySQL website, no memcached, 3 web nodes that interact with DB,
one that serves images, one master DB that serves all reads/writes,
backup DB that only serves for backup/failover at this time (app
being changed to split reads/writes, not yet).
The one table that I converted is 130M rows, going from around 10GB
of data as MyISAM to 22GB as InnoDB. There are around 110 tables on
the DB total.
My.cnf abbreviated settings:
[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-locking
key_buffer = 3G
sort_buffer_size = 45M
max_allowed_packet = 16M
table_cache = 2048
tmp_table_size = 512M
max_heap_table_size = 512M
myisam_sort_buffer_size = 512M
myisam_max_sort_file_size = 10G
myisam_repair_threads = 1
thread_cache_size = 300
query_cache_type = 1
query_cache_limit = 1M
query_cache_size = 600M
thread_concurrency = 8
max_connections = 2048
sync_binlog = 1
innodb_buffer_pool_size = 14G
innodb_log_file_size = 20M
innodb_flush_log_at_trx_commit=1
innodb_flush_method = O_DIRECT
skip-innodb-doublewrite
innodb_support_xa = 1
innodb_autoextend_increment = 16
innodb_data_file_path = ibdata1:40G:autoextend
We're seeing a significantly higher percentage of IO wait on the
system, averaging 20% now with the majority of that being user IO.
The system is not swapping at all.
Any ideas for what to check or modify to increase the performance
here and let MyISAM and InnoDB play better together? The plan was to
convert all tables to InnoDB, which does not seem like a great idea
at this point; we're considering moving back to MyISAM.
Thanks!
Josh Miller, RHCE
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]