Re: Foreign Keys in CREATE TABLEs produced by mysqldump
> Heikki Tuuri [EMAIL PROTECTED] wrote on 12/24/2003 01:53:07 PM:
> Michael, this is the question where the valid answer is: upgrade!

In which version was this corrected? I'm hesitant to upgrade to the latest and greatest without it being banged on by the masses first. I have half a dozen or so business applications running with this MySQL data store, and regression-testing them would involve an amount of time I cannot afford to spend. The 3.23 branch works well for us right now, and I have no real need to use 4.0 (feature-wise). So until 3.23.x is no longer supported or there is a dire need for 4.x functionality, I can't make a business case justifying the regression-test work.

Found it in the changelog:

> MySQL/InnoDB-3.23.57, June 20, 2003: Changed the default value of innodb_flush_log_at_trx_commit from 0 to 1. If you have not specified it explicitly in your my.cnf, and your application runs much slower with this new release, it is because the value 1 causes a log flush to disk at each transaction commit.

That's it! Thanks.

-mike
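For anyone else who hits the same slowdown after upgrading within 3.23: the pre-3.23.57 behaviour can be restored explicitly. A sketch of the my.cnf line, using the 3.23-era set-variable syntax; note that the value 0 trades durability for speed (up to roughly a second of committed transactions can be lost on a crash):

```
[mysqld]
# Restore the pre-3.23.57 default: flush the InnoDB log roughly once
# per second instead of at every transaction commit.
set-variable = innodb_flush_log_at_trx_commit=0
```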
Foreign Keys in CREATE TABLEs produced by mysqldump
First, let me state that this is not a question where a valid answer is to SET FOREIGN_KEY_CHECKS=0... (no, it's not *that* question.)

I have noticed that mysqldump includes the database name in foreign key specifications within CREATE TABLEs. This is causing a bit of grief, as I would like to reimport such a dumped data set under a *different* database name (multiple instances of a data set being created for development, QA, and UA purposes). Is there any way to modify this behaviour? I would prefer not to have to modify a mysqldump'ed file (at all!) to have it reimported.

In addition, is there a known bug in 3.23.58 where reading bulk queries is very slow? (i.e., mysql FOODB < foodb.dump) Ever since upgrading (on FreeBSD 5.2-RC1) this has been horrendously slow. Almost NO CPU time is consumed by any process involved, and both server and client processes are often in state 'S' (sleeping for less than 20 seconds). This also occurs when using a client running on a Linux machine to feed data to the aforementioned mysqld on the FreeBSD host. I'm about to break up my dumps into table-specific files so that I can use LOAD DATA INFILE to help work around this problem.

-mike
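For the archives, one workaround sketch (not a mysqldump option — the database name FOODB is a placeholder, and the assumption that the dump backtick-quotes identifiers is mine) is to strip the database qualifier from the REFERENCES clauses before reimporting:

```shell
#!/bin/sh
# Sketch: remove the "`FOODB`." database qualifier from foreign-key
# REFERENCES clauses so the dump can be imported under any database name.
# FOODB is a placeholder; assumes the dump backtick-quotes identifiers.
DB=FOODB
line='CONSTRAINT fk FOREIGN KEY (`pid`) REFERENCES `FOODB`.`parent` (`id`)'
echo "$line" | sed "s/REFERENCES \`$DB\`\./REFERENCES /g"
# prints: CONSTRAINT fk FOREIGN KEY (`pid`) REFERENCES `parent` (`id`)
# Against a real dump file:
#   sed "s/REFERENCES \`$DB\`\./REFERENCES /g" foodb.dump > portable.dump
```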
Strange performance problem importing dumped data (3.23.58 on FreeBSD 5.2-RC1)
I am using 3.23.58 (server and client) on FreeBSD 5.2-RC1, and everything I am about to describe is being performed locally. When importing dumped data (with something like mysql dbname < dbname.dump) the import runs at ~60 queries a second. This is MUCH slower than when I was running 3.23.51 on FreeBSD 4.x-STABLE. My data set is approximately 200k rows, so this isn't really acceptable for a ~45MB import.

I have the following information, all captured during the import process.

The storage device is pegged making a large number of small transfers (iostat output):

      tty             ipsd0              fd0              cpu
 tin tout   KB/t tps  MB/s   KB/t tps  MB/s   us ni sy in id
   1  110  17.41   2  0.03   0.00   0  0.00    0  0  0  0 99
   0   36  15.91 108  1.68   0.00   0  0.00    3  0  8  0 88
   0   12  16.00  90  1.41   0.00   0  0.00    1  0  7  1 91
   0   12  16.00  96  1.50   0.00   0  0.00    2  0  9  0 88
   0   12  16.00  84  1.31   0.00   0  0.00    3  0  5  0 91

mysqld process system call frequency (PID 517):

 189549  517 mysqld CALL gettimeofday
  92175  517 mysqld CALL read
  46255  517 mysqld CALL pwrite
  46171  517 mysqld CALL fsync
  46067  517 mysqld CALL write
   5567  517 mysqld CALL poll
   4857  517 mysqld CALL sigreturn
   4455  517 mysqld CALL sigprocmask
    229  517 mysqld CALL fcntl
    106  517 mysqld CALL clock_gettime
     35  517 mysqld CALL setitimer
     13  517 mysqld CALL lseek
     10  517 mysqld CALL open
     10  517 mysqld CALL access
      9  517 mysqld CALL fstat
      9  517 mysqld CALL close
      8  517 mysqld CALL pread
      7  517 mysqld CALL setsockopt
      7  517 mysqld CALL getsockname
      7  517 mysqld CALL accept
      6  517 mysqld CALL shutdown

This is over a span of approximately 6 minutes. (What's up with the gettimeofday calls...??)
And lastly, here is vmstat output for 30 seconds during the same import:

 procs    memory     page                     disks   faults       cpu
 r b w    avm    fre flt re pi po fr sr ip0 fd0   in   sy  cs us sy id
 1 0 0 195588 411848   0  0  0  9  1  0   0 3370  291   1  1 99
 0 1 0 195588 407521   0  0  0  2  0 82   0 4170 1458   3  7 90
 0 1 0 195588 402240   0  0  0  0  0 83   0 4170 1505   3  6 91
 0 1 0 195588 396960   0  0  0  0  0 94   0 4260 1654   3  7 89
 0 1 0 195588 391040   0  0  0  0  0 83   0 4170 1477   3  7 90
 0 1 0 195588 385920   0  0  0  1  0 95   0 4280 1671   4  9 88

My mysqld configuration is as follows:

[mysqld]
innodb_data_file_path = ibdata1:24M:autoextend
set-variable = innodb_log_file_size=8M
set-variable = innodb_log_buffer_size=4M
set-variable = innodb_buffer_pool_size=8M
set-variable = innodb_additional_mem_pool_size=2M
set-variable = table_cache=128
set-variable = sort_buffer=8M
set-variable = record_buffer=2M
set-variable = key_buffer_size=32M
set-variable = wait_timeout=60

My ibdata1 file is approximately 180MB right now. It looks to me as though the mysqld process is blocking like crazy on I/O, but I have not noticed poor performance with any other processing on the same system. All filesystem-based tests I can come up with (creation of many small files, large files, etc.) pass with flying colours.

Note that the throughput on the ipsd0 device (transfer rate shown in the iostat output listed above) is FAR greater than the amount of data that is (or should be) read/written by mysqld, and nothing else is running on the host at this time. Could this possibly be an ips driver bug in FreeBSD 5.2 that mysqld is somehow exploiting? Is there something amiss with the above call frequencies? Am I ignorant of a known issue? Is there some buffer-related configuration for mysqld that I might need to (re)specify?

-mike
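The pwrite/fsync counts above track the ~200k row inserts almost one-to-one, which suggests each autocommitted INSERT is being flushed to the log individually. One way to test that theory — a sketch, with dbname.dump standing in for the real dump file — is to wrap the whole import in a single transaction:

```shell
#!/bin/sh
# Build an import stream that commits once at the end, so InnoDB flushes
# its log once instead of once per INSERT. dbname.dump is a placeholder;
# a one-line stand-in dump is created here for illustration.
printf 'INSERT INTO t VALUES (1);\n' > dbname.dump
{ echo 'SET AUTOCOMMIT=0;'; cat dbname.dump; echo 'COMMIT;'; } > import.sql
cat import.sql
# The real run would then be:  mysql dbname < import.sql
```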
Re: Strange performance problem importing dumped data (3.23.58 on FreeBSD 5.2-RC1)
I have found that by dumping with --extended-insert, the subsequent import is MUCH faster. Of course, it only issues 450-odd queries for the data set instead of ~200k...

-mike

(Quoting [EMAIL PROTECTED], 12/23/03 05:02 PM, Subject: Strange performance problem importing dumped data (3.23.58 on FreeBSD 5.2-RC1) — original message above.)
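A rough illustration of why --extended-insert helps so much: a per-row dump issues one statement (and, with autocommit, one log flush) per row, while the extended form batches the same rows into a single INSERT. The table name t and the example rows here are made up:

```shell
#!/bin/sh
# Per-row form, as a non-extended dump would emit it:
printf '%s\n' \
  "INSERT INTO t VALUES (1,'a');" \
  "INSERT INTO t VALUES (2,'b');" \
  "INSERT INTO t VALUES (3,'c');" > perrow.sql

# Batch the value lists into one --extended-insert style statement:
sed "s/^INSERT INTO t VALUES //; s/;\$//" perrow.sql \
  | paste -sd, - \
  | sed "s/^/INSERT INTO t VALUES /; s/\$/;/"
# prints: INSERT INTO t VALUES (1,'a'),(2,'b'),(3,'c');
```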