Hey, I know this is a bit of a vague question, but over a period of days mysqldump will take 2 minutes to complete on one day and 25 minutes on the next, with the resulting SQL file being maybe 200 MB bigger. The dataset isn't really all that large: 14 GB on disk, with the dump files around 7 GB, gzipping down to ~700 MB. What I'd like to know is how I can check what is causing the slowness on any given day, or whether there is anything I can check at all.
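For what it's worth, the only idea I've had so far is to split the dump up per database and time each one, so I can see whether one particular database accounts for the extra time and size. Something roughly along these lines (just a rough sketch: the output directory is the one my cron job already writes to, the timing log file name is made up, the password is masked as in the cron line below, and it skips the --master-data option since it dumps per database):

    #!/bin/bash
    # Rough sketch: dump each database separately and log how long each one
    # takes and how big the resulting file is, so a slow run can be traced
    # to a specific database.
    OUT=/home/kaluma/sqldumps          # same directory the cron job uses
    LOG=$OUT/dump-timings.log          # placeholder log file name
    HOUR=$(/bin/date +%H)

    for db in $(mysql -uroot -p****** -N -e 'SHOW DATABASES' | grep -v '^information_schema$'); do
        start=$(date +%s)
        /usr/bin/mysqldump -uroot -p****** -e -q -Q --databases "$db" > "$OUT/$db.$HOUR.sql"
        end=$(date +%s)
        size=$(stat -c%s "$OUT/$db.$HOUR.sql")
        echo "$(date '+%F %T') $db: $((end - start))s, $size bytes" >> "$LOG"
    done

Is that a sensible way to go about it, or is there something on the server itself I should be looking at instead?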
Server info: the server is an Ubuntu 10.04 LTS box with a total of 4 CPUs (8 cores).

    r...@jabba:~# uname -a
    Linux jabba 2.6.32-26-server x86_64 GNU/Linux
    r...@jabba:~# free -g
                 total       used       free     shared    buffers     cached
    Mem:            47         25         21          0          0          9
    -/+ buffers/cache:         15         31
    Swap:           75          0         75

Alterations made to my.cnf:

    net_read_timeout = 600
    table_cache = 550
    tmp_table_size = 256M
    max_heap_table_size = 256M
    query_cache_limit = 40M
    query_cache_size = 128M
    innodb_buffer_pool_size = 20G
    innodb_flush_log_at_trx_commit = 1
    innodb_flush_method = O_DIRECT
    innodb_log_file_size = 128M

Disk usage of /var/lib/mysql:

    r...@jabba:~# du -hs /var/lib/mysql/
    14G     /var/lib/mysql/

The cron job is run as follows:

    # Dump all databases every 6 hours
    # -A (All Databases) -e Extended inserts -q quick -Q Quote names
    00 00 * * * /usr/bin/mysqldump -uroot -p****** -A -e -q -Q --master-data=2 > /home/kaluma/sqldumps/all_databases.`/bin/date +\%H`.sql

The server is replicated to a single slave that has a similar cron job defined (minus the --master-data option), and the dumps there are fairly consistent in how long they take to complete.

    r...@jumbo:~$ ls -lth sqldumps/
    total 2.9G
    -rw-r--r-- 1 root root 726M 2011-01-12 06:03 all_databases.06.sql.gz
    -rw-r--r-- 1 root root 723M 2011-01-12 00:03 all_databases.00.sql.gz
    -rw-r--r-- 1 root root 719M 2011-01-11 18:03 all_databases.18.sql.gz
    -rw-r--r-- 1 root root 712M 2011-01-11 12:03 all_databases.12.sql.gz

I'm not sure what other information you'll need, but if you ask I'll try to provide as much data as I can.

Thanks and kind regards,
Feighen