Jeff,

Thanks for your idea. Deleting data x rows at a time would certainly help, if AUTOCOMMIT=TRUE. But I have no idea how :) I have tried:

DELETE FROM very_large_table WHERE delete_flag=1 LIMIT 100000;

But the LIMIT is not understood (4.1.8).

Unfortunately my 'delete_flag' is not a key, so repeated attempts at deleting small amounts of data are very slow.

The fastest way seems to be to dump the data, edit the file, and re-insert the data.

But you have given me my solution:

If I cannot disable transactions, I'll have to work with one of the keys and iterate through that key bit by bit. So thanks, I'll go off and give it a go...
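
Roughly what I have in mind, assuming an indexed integer primary key on the table (I'll call the column 'id' here purely as a placeholder; the real key may differ):

-- Work in small transactions: disable autocommit and commit each key range.
SET AUTOCOMMIT=0;

DELETE FROM very_large_table WHERE delete_flag=1 AND id >= 1 AND id < 100001;
COMMIT;

DELETE FROM very_large_table WHERE delete_flag=1 AND id >= 100001 AND id < 200001;
COMMIT;

-- ...and so on, stepping the id range until the end of the key is reached.

If one range fails, only that small transaction should roll back, rather than hours of work.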

Ben.


Jeff Smelser wrote:
On Monday 14 February 2005 03:52 am, Ben Clewett wrote:

I am having a lot of problems deleting a large amount of data.  Say 20GB
from a 40GB table.  I seem to get failures quite a lot (due NOT to MySQL,
but to bad hardware), then MySQL rolls back the transaction, which takes as
many hours as starting the transaction did.  I also get this a lot:


There is a feature of DB2 that can do this.. It's really not always all it's cracked up to be..

In this case, it would happily delete, but if something goes wrong, your table is now marked bad.. The other 20 million rows are now gone.. Is that what you want?

What you need to do is set up a simple script to delete 20,000 rows at a time, and commit, and just keep doing it till it's done.. This way you could do 20,000 rows, wait a bit, do it again, or whatever. If it fails, you only roll back what it was doing during that transaction and you won't have to start all over.

Jeff
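
For anyone finding this in the archives, the statements such a script would issue look roughly like this, assuming the server accepts DELETE ... LIMIT (which mine, 4.1.8, apparently does not):

SET AUTOCOMMIT=0;

-- One small batch per transaction; a wrapper script repeats this,
-- sleeping between rounds if needed, until the DELETE reports zero rows.
DELETE FROM very_large_table WHERE delete_flag=1 LIMIT 20000;
COMMIT;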

--
Ben Clewett
Road Tech Computer System Ltd
[EMAIL PROTECTED]
http://www.roadrunner.uk.com
+44(0)1923 460000
