Max Bube <maxbube <at> gmail.com> writes:
> The problem starts when I run bulk writes like an ALTER TABLE or a
> restore from mysqldump: it starts processing more than 50000 rows/s,
> but suddenly the rate drops to 100 rows/s, and then it stays stuck at
> that rate even if I restart MySQL. The only way to get good performance
> again is deleting all InnoDB files (ibdata, iblog files) and restoring
> the DB again.
>
> The DBs are relatively small, about 70M rows and 10GB in size. I can
> reproduce this behavior every time just by running 2 restores of the
> same database.
>
> Another example of when it gets stuck:
>
> I want to delete 1M rows.
> "delete from table where id IN (select id from ....)" deletes 100
> rows/s, but if I run 1 million "delete from table where id = xxx"
> statements, it deletes 10000 rows/s.

How busy are your disks when you start seeing the slowdown in the delete
process? Are there BLOBs or big VARCHARs in the rows you are deleting?

InnoDB might be filling up its log files; when you see a slowdown, it may
be flushing the log to disk. One workaround is to not delete a million
rows in one statement, but to delete in batches of 1000 rows. My guess
would be that if each row is of size B bytes and you delete in batches of
at most [innodb_log_file_size (in bytes) - 100 MB (in bytes)] / B rows,
you should not see a slowdown.
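A minimal sketch of that batching idea, under some assumptions: the table
and procedure names (mytable, ids_to_delete, delete_in_batches) are made
up for illustration, while innodb_log_file_size and ROW_COUNT() are real
MySQL features. With autocommit on, each DELETE commits on its own, so no
single transaction can fill the redo log:

    -- Check the redo log size to budget the batches. For example, with
    -- a 256 MB log file and 160-byte rows, the rule of thumb above would
    -- allow (256 MB - 100 MB) / 160 B, roughly 1 million rows per batch.
    SHOW VARIABLES LIKE 'innodb_log_file_size';

    DELIMITER //
    CREATE PROCEDURE delete_in_batches()
    BEGIN
      REPEAT
        -- mytable holds the data; ids_to_delete lists the ids to
        -- remove. Each statement deletes at most 1000 rows.
        DELETE FROM mytable
         WHERE id IN (SELECT id FROM ids_to_delete)
         LIMIT 1000;
      UNTIL ROW_COUNT() = 0 END REPEAT;  -- stop once a batch deletes nothing
    END//
    DELIMITER ;

    CALL delete_in_batches();

The same applies to the slow IN (...) delete above: keeping each statement
small keeps the per-transaction log usage bounded.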