On Monday 14 February 2005 03:52 am, Ben Clewett wrote:
> I am having a lot of problems deleting a large amount of data.  Say 20GB
> from a 40GB table.  I seem to get failure quite a lot (due NOT to MySQL,
> but bad hardware), then MySQL rolls back the transaction, which takes as
> many hours as starting the transaction did.  I also get this a lot:

There is a feature of DB2 that can do this. It's really not always all it's
cracked up to be.

In this case, it would happily delete, but if something goes wrong, your table
is now marked bad. The other 20 million rows are now gone. Is that what you
want?

What you need to do is set up a simple script that deletes 20,000 rows at a
time and commits, and just keep doing it until it's done. That way you could
delete 20,000 rows, wait a bit, then do it again, or whatever suits you. If it
fails, you only roll back what the current transaction was doing, and you
won't have to start all over.
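A rough sketch of that loop, with made-up table and column names (old_rows,
expired). MySQL supports `DELETE ... LIMIT n` directly; this demo uses
Python's stdlib SQLite with a rowid subquery instead, just so it runs
anywhere without a server:

```python
# Hypothetical batched-delete sketch. Table/column names are invented for
# illustration. On MySQL you'd use "DELETE FROM old_rows WHERE expired = 1
# LIMIT ?"; SQLite needs the rowid-subquery form shown here.
import sqlite3

def delete_in_batches(conn, batch_size=20_000):
    """Delete matching rows batch_size at a time, committing each batch."""
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM old_rows WHERE rowid IN "
            "(SELECT rowid FROM old_rows WHERE expired = 1 LIMIT ?)",
            (batch_size,),
        )
        conn.commit()  # a crash now only rolls back the current batch
        if cur.rowcount == 0:
            break  # nothing left to delete
        total += cur.rowcount
        # optionally sleep here to let the server breathe between batches
    return total

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE old_rows (id INTEGER, expired INTEGER)")
    conn.executemany(
        "INSERT INTO old_rows VALUES (?, ?)",
        [(i, i % 2) for i in range(50_000)],  # half the rows are "expired"
    )
    conn.commit()
    print(delete_in_batches(conn))
```

Since each batch commits on its own, a hardware failure mid-run costs you at
most one batch of work instead of hours of rollback.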

Jeff
