Ben Clewett wrote:

> Jeff,
>
> Thanks for your idea. Deleting data x rows at a time would certainly help, if AUTOCOMMIT=TRUE. But I have no idea how :) I have tried:
>
>   DELETE FROM very_large_table WHERE delete_flag=1 LIMIT 100000;
>
> But the LIMIT is not understood (4.1.8).

What do you mean? That appears to be valid syntax. Do you get an error or unexpected results? (You might also want to try a smaller limit).


> Unfortunately my 'delete_flag' is not a key. Therefore repeated attempts at deleting small amounts of data are very slow.

Right, no index on delete_flag means a table scan. The LIMIT N should cause MySQL to stop as soon as it finds (and deletes) the Nth matching row, though. The trick is to set N low enough to get the speed you need. Unfortunately, each run will probably take longer than the one before, as there will be farther to go to find the first match.
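The batching loop described above is simply "delete N, commit, repeat until nothing matches." Here is a runnable sketch of that pattern; SQLite stands in for MySQL (stock SQLite lacks DELETE ... LIMIT, so a rowid subquery emulates it — on MySQL 4.1 you would just rerun DELETE ... LIMIT N until it reports 0 rows affected). The table contents and batch size are invented for the demo:

```python
import sqlite3

# Demo table: 10,000 rows, every 3rd row flagged for deletion.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE very_large_table (id INTEGER PRIMARY KEY, delete_flag INTEGER)")
conn.executemany("INSERT INTO very_large_table VALUES (?, ?)",
                 [(i, int(i % 3 == 0)) for i in range(1, 10001)])
conn.commit()

BATCH = 1000  # keep each delete (and its transaction) small
total = 0
while True:
    cur = conn.execute(
        "DELETE FROM very_large_table WHERE rowid IN "
        "(SELECT rowid FROM very_large_table WHERE delete_flag = 1 LIMIT ?)",
        (BATCH,),
    )
    conn.commit()  # emulates AUTOCOMMIT=TRUE: one small commit per batch
    if cur.rowcount == 0:
        break  # no matching rows left
    total += cur.rowcount

print(total)  # → 3333 (the flagged rows: 3, 6, 9, ... 9999)
```

Each pass commits a small, bounded amount of work, so no single transaction ever grows large — which is the whole point of batching when you cannot disable transactions.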


I expect delete_flag is either 0 or 1. What percent of rows have delete_flag = 1? If small enough, an index on delete_flag would be used for this statement to speed things up. Of course, that's an extra index probably not used for anything else.
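Whether the optimizer actually uses such an index shows up in the query plan. A minimal sketch of checking that (SQLite standing in for MySQL again, where you would use EXPLAIN instead; the table and index names are invented):

```python
import sqlite3

# Demo table with roughly 1% of rows flagged — selective enough
# that an index on delete_flag is worth using.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE very_large_table (id INTEGER PRIMARY KEY, delete_flag INTEGER)")
conn.executemany("INSERT INTO very_large_table VALUES (?, ?)",
                 [(i, int(i % 100 == 0)) for i in range(1, 10001)])
conn.execute("CREATE INDEX idx_delete_flag ON very_large_table (delete_flag)")
conn.commit()

# With the index, the planner can seek straight to the flagged rows
# instead of scanning the whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN DELETE FROM very_large_table WHERE delete_flag = 1"
).fetchall()
print(plan[0][-1])  # plan text should mention idx_delete_flag
```

If most rows had delete_flag = 1, the index would buy little — the scan would touch nearly everything anyway — which is why the percentage of flagged rows matters.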

If you haven't read them already, you may find these pages from the manual helpful:
<http://dev.mysql.com/doc/mysql/en/delete.html>
<http://dev.mysql.com/doc/mysql/en/delete-speed.html>


> The fastest way seems to be to dump the data, edit the file, and re-insert the data.
>
> But you have given me my solution:
>
> If I cannot disable transactions, I'll have to work with one of the keys and iterate through that key bit by bit. So thanks, I'll go off and give it a go...
>
> Ben.
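The key-iteration plan Ben describes — walking a real key in small ranges and deleting the flagged rows inside each range — can be sketched as follows (SQLite standing in for MySQL; table, range width, and data are invented for the demo):

```python
import sqlite3

# Demo table: 10,000 rows keyed by id, every 4th row flagged.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE very_large_table (id INTEGER PRIMARY KEY, delete_flag INTEGER)")
conn.executemany("INSERT INTO very_large_table VALUES (?, ?)",
                 [(i, int(i % 4 == 0)) for i in range(1, 10001)])
conn.commit()

STEP = 2500  # width of each key range; tune to keep transactions small
max_id = conn.execute("SELECT MAX(id) FROM very_large_table").fetchone()[0]
deleted = 0
for lo in range(1, max_id + 1, STEP):
    # The BETWEEN on the primary key bounds the work of each pass,
    # even though delete_flag itself is unindexed.
    cur = conn.execute(
        "DELETE FROM very_large_table WHERE delete_flag = 1 AND id BETWEEN ? AND ?",
        (lo, lo + STEP - 1),
    )
    conn.commit()  # one small transaction per key range
    deleted += cur.rowcount

print(deleted)  # → 2500 (the flagged rows: 4, 8, ... 10000)
```

Because the key range bounds each pass, this avoids the problem with plain LIMIT batches noted above, where each successive run scans farther to find its first match.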

Michael


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]


