I have found that dumping with --extended-insert makes the subsequent import MUCH faster. Of course, it only issues 450-odd queries for the data set instead of ~200k...
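A quick local illustration of why the batched form wins (the dump flag itself, --extended-insert, is a real mysqldump option; the table name t and the row values are made up for the sketch):

```shell
#!/bin/sh
# Real-world usage would be along these lines (not run here; needs a server):
#   mysqldump --extended-insert dbname > dbname.dump
#   mysql dbname < dbname.dump
#
# Below: build both INSERT styles for the same 6 rows and count statements.
rows="1 2 3 4 5 6"

# Without --extended-insert: one INSERT statement per row.
per_row=$(for r in $rows; do echo "INSERT INTO t VALUES ($r);"; done)

# With --extended-insert: a single multi-row INSERT covering all rows.
values=$(echo $rows | sed 's/ /),(/g')
extended="INSERT INTO t VALUES ($values);"

echo "$per_row" | wc -l                    # 6 statements
printf '%s\n' "$extended" | grep -c ';'    # 1 statement
```

Fewer statements means fewer parse/commit round trips on the server, which is where the speedup comes from.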
-mike

[EMAIL PROTECTED] 12/23/03 05:02 PM
To: [EMAIL PROTECTED]
Subject: Strange performance problem importing dumped data (3.23.58 on FreeBSD 5.2-RC1)

I am using 3.23.58 (server and client) on FreeBSD 5.2-RC1, and everything I am about to describe is being performed locally.

When importing dumped data (with something like mysql dbname < dbname.dump), the import runs at ~60 queries per second. This is MUCH slower than when I was running 3.23.51 on FreeBSD 4.x-STABLE. My data set is approx 200k rows, so this isn't really acceptable for a ~45MB import.

I have the following information, all captured during the import process.

The storage device is pegged, making a large number of small transfers (iostat output):

      tty             ipsd0             fd0             cpu
 tin tout    KB/t tps  MB/s   KB/t tps  MB/s   us ni sy in id
   1  110   17.41   2  0.03   0.00   0  0.00    0  0  0  0 99
   0   36   15.91 108  1.68   0.00   0  0.00    3  0  8  0 88
   0   12   16.00  90  1.41   0.00   0  0.00    1  0  7  1 91
   0   12   16.00  96  1.50   0.00   0  0.00    2  0  9  0 88
   0   12   16.00  84  1.31   0.00   0  0.00    3  0  5  0 91

mysqld process call frequency:

  189549   517 mysqld   CALL  gettimeofday
   92175   517 mysqld   CALL  read
   46255   517 mysqld   CALL  pwrite
   46171   517 mysqld   CALL  fsync
   46067   517 mysqld   CALL  write
    5567   517 mysqld   CALL  poll
    4857   517 mysqld   CALL  sigreturn
    4455   517 mysqld   CALL  sigprocmask
     229   517 mysqld   CALL  fcntl
     106   517 mysqld   CALL  clock_gettime
      35   517 mysqld   CALL  setitimer
      13   517 mysqld   CALL  lseek
      10   517 mysqld   CALL  open
      10   517 mysqld   CALL  access
       9   517 mysqld   CALL  fstat
       9   517 mysqld   CALL  close
       8   517 mysqld   CALL  pread
       7   517 mysqld   CALL  setsockopt
       7   517 mysqld   CALL  getsockname
       7   517 mysqld   CALL  accept
       6   517 mysqld   CALL  shutdown

This is over a span of approx 6 minutes. (What's up with the gettimeofday calls...??)
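The post does not say which tool produced the per-syscall counts, but on FreeBSD a table in exactly this shape is typically gathered with ktrace(1) and kdump(1); the aggregation step is a one-line awk. A sketch, with the live-tracing commands shown as comments (they need a running mysqld) and the awk part demonstrated on a canned kdump-style sample:

```shell
#!/bin/sh
# Live usage (assumption: this is how the counts in the post were made):
#   ktrace -p <mysqld-pid>      # start tracing; writes ktrace.out
#   ...wait through the import...
#   ktrace -C                   # stop tracing
#   kdump | awk '$3 == "CALL" {sub(/\(.*/, "", $4); n[$4]++}
#                END {for (c in n) print n[c], c}' | sort -rn
#
# The same aggregation run on a small inline sample of kdump-style lines
# (pid, command, record type, syscall-with-args):
sample='517 mysqld CALL fsync(0x10)
517 mysqld CALL write(0x10,0x8,0x4)
517 mysqld CALL fsync(0x10)'

echo "$sample" | awk '$3 == "CALL" {sub(/\(.*/, "", $4); n[$4]++}
                      END {for (c in n) print n[c], c}' | sort -rn
```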
And lastly, here is vmstat output for 30 seconds during the same import:

 procs      memory     page                      disks      faults        cpu
 r b w   avm    fre   flt re pi po fr sr ip0 fd0  in  sy   cs  us sy id
 1 0 0 195588 41184    8  0  0  0  9  1   0   0  337   0  291   1  1 99
 0 1 0 195588 40752    1  0  0  0  2  0  82   0  417   0 1458   3  7 90
 0 1 0 195588 40224    0  0  0  0  0  0  83   0  417   0 1505   3  6 91
 0 1 0 195588 39696    0  0  0  0  0  0  94   0  426   0 1654   3  7 89
 0 1 0 195588 39104    0  0  0  0  0  0  83   0  417   0 1477   3  7 90
 0 1 0 195588 38592    0  0  0  0  1  0  95   0  428   0 1671   4  9 88

My mysqld configuration is as follows:

[mysqld]
innodb_data_file_path = ibdata1:24M:autoextend
set-variable = innodb_log_file_size=8M
set-variable = innodb_log_buffer_size=4M
set-variable = innodb_buffer_pool_size=8M
set-variable = innodb_additional_mem_pool_size=2M
set-variable = table_cache=128
set-variable = sort_buffer=8M
set-variable = record_buffer=2M
set-variable = key_buffer_size=32M
set-variable = wait_timeout=60

My ibdata1 file is approx 180MB right now.

It looks to me as though the mysqld process is blocking like crazy on I/O, but I have not noticed poor performance with anything else running on the same system. All filesystem-based tests I can come up with (creation of many small files, large files, etc.) pass with flying colours.

Note that the throughput on the ipsd0 device (transfer rate shown in the iostat output above) is FAR greater than the amount of data that is (should be?) being read/written by mysqld (nothing else is running on the host at this time).

Could this possibly be an ips driver bug in FreeBSD 5.2 that mysqld is somehow exploiting? Is there something amiss with the above call frequencies? Am I ignorant of a known issue? Is there some buffer-related configuration for mysqld that I might need to (re)specify?

-mike
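On the closing question about buffer-related configuration: the near 1:1 ratio of write to fsync calls in the trace is what InnoDB produces when it flushes its log at every commit, i.e. at every single-row INSERT while autocommit is on. A hedged my.cnf sketch of the knob that controls this (innodb_flush_log_at_trx_commit is a real InnoDB server variable, available in the 3.23 series; whether relaxing it is acceptable depends on how much crash durability you can give up during the import):

```
[mysqld]
# 1 (default): write and fsync the InnoDB log at every transaction
#   commit -- with autocommit on, that is one fsync per INSERT.
# 2: write the log at each commit but fsync only about once a second;
#   an OS crash can lose roughly the last second of commits.
# 0: write and fsync only about once a second (fastest, least safe).
set-variable = innodb_flush_log_at_trx_commit=2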