Re: Deleting Records in Big tables

2011-11-10 Thread Johan De Meersman
- Original Message - From: mos mo...@fastmail.fm: If you could use MyISAM tables then you could use Merge Tables and... Ick, merge tables :-) If your version is recent enough (isn't 4.whatever long out of support anyway?) you're much better off using partitioning - it's engine-agnostic
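Johan's partitioning suggestion could look roughly like this. This is a minimal sketch, not from the thread: the partition names and bounds are invented, the column list is trimmed, and it assumes MySQL 5.1 or later (partitioning works with both MyISAM and InnoDB).

```sql
-- Hypothetical: metadata table partitioned by ranges of id.
-- The partitioning column must appear in every unique key of the table.
CREATE TABLE metadata (
  meta_id BIGINT NOT NULL AUTO_INCREMENT,
  id      BIGINT NOT NULL,
  url     VARCHAR(800) DEFAULT NULL,
  PRIMARY KEY (meta_id, id)
)
PARTITION BY RANGE (id) (
  PARTITION p0   VALUES LESS THAN (1000),
  PARTITION p1   VALUES LESS THAN (2000),
  PARTITION p2   VALUES LESS THAN (3000),
  PARTITION pmax VALUES LESS THAN MAXVALUE
);

-- Retiring old rows is then a metadata operation,
-- not a row-by-row DELETE:
ALTER TABLE metadata DROP PARTITION p0;
```

Dropping a partition completes in roughly the time of a DROP TABLE, which is the whole point compared to a multi-hour DELETE.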

Re: Deleting Records in Big tables

2011-11-09 Thread mos
If you could use MyISAM tables then you could use Merge Tables and create a table for each day (or whatever period you are collecting data for). Then, when it is time to get rid of the old data, drop the oldest table (T2001, or T10 for 10 days ago) and create a new empty table for the new
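mos's per-day merge-table scheme might be sketched like this. The table names here are invented (the thread's T2001/T10 naming is replaced for readability), and note that the MERGE engine only works over identical MyISAM tables:

```sql
-- Hypothetical: one MyISAM table per day, unioned by a MERGE table.
CREATE TABLE t_day1 (id BIGINT, payload VARCHAR(100)) ENGINE=MyISAM;
CREATE TABLE t_day2 LIKE t_day1;

CREATE TABLE t_all (id BIGINT, payload VARCHAR(100))
  ENGINE=MERGE UNION=(t_day1, t_day2) INSERT_METHOD=LAST;

-- Retiring the oldest day: redefine the union, then drop the member.
ALTER TABLE t_all UNION=(t_day2);
DROP TABLE t_day1;
```

Queries go against t_all; expiring a day of data never touches individual rows.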

Deleting Records in Big tables

2011-11-04 Thread Adarsh Sharma
Dear all, today I need to delete some records from 70 GB tables. I have 4 tables in a MySQL database. My delete command is: delete from metadata where id > 2474; but it takes hours to complete. One of my table structures is: CREATE TABLE `metadata` ( `meta_id` bigint(20) NOT NULL

Re: Deleting Records in Big tables

2011-11-04 Thread Ananda Kumar
Why don't you create a new table holding the rows where id <= 2474, rename the original table to _old and the new table to the actual table name. Or you could write a stored procedure that loops through the rows and deletes them, which will be faster. A single plain DELETE statement over that much data will take ages.
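Ananda's copy-and-rename approach might look like this. A minimal sketch: the cutoff and comparison direction are reconstructed from the thread, so adjust them to whichever rows you intend to keep.

```sql
-- Hypothetical: rebuild the table from the rows to keep, then swap.
CREATE TABLE metadata_new LIKE metadata;
INSERT INTO metadata_new SELECT * FROM metadata WHERE id <= 2474;

-- Both renames happen atomically in one statement.
RENAME TABLE metadata TO metadata_old, metadata_new TO metadata;

-- Dropping a table is far cheaper than a huge DELETE.
DROP TABLE metadata_old;
```

The catch, as Adarsh notes below, is that this needs enough free disk to hold the kept rows twice during the copy.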

Re: Deleting Records in Big tables

2011-11-04 Thread Adarsh Sharma
Thanks Anand. Ananda Kumar wrote: Why don't you create a new table holding the rows where id <= 2474, rename the original table to _old and the new table to the actual table name. I need to delete rows from 5 tables of 50 GB each, and I don't have sufficient space to store the extra data. My application loads 2 GB of data

Re: Deleting Records in Big tables

2011-11-04 Thread Reindl Harald
Am 04.11.2011 08:22, schrieb Adarsh Sharma: delete from metadata where id > 2474; but it takes hours to complete. CREATE TABLE `metadata` ( `meta_id` bigint(20) NOT NULL AUTO_INCREMENT, `id` bigint(20) DEFAULT NULL, `url` varchar(800) DEFAULT NULL, `meta_field` varchar(200) DEFAULT

Re: Deleting Records in Big tables

2011-11-04 Thread Ananda Kumar
CREATE PROCEDURE qrtz_purge() BEGIN declare l_id bigint(20); declare NO_DATA INT DEFAULT 0; DECLARE LST_CUR CURSOR FOR select id from table_name where id > 123; DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1; OPEN LST_CUR; SET NO_DATA = 0; FETCH LST_CUR INTO l_id;
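The archive truncates Ananda's procedure, so here is a hypothetical completion: the loop body, the per-row DELETE, and the closing statements are reconstructed, not from the thread.

```sql
-- Hypothetical completion of the cursor-based purge procedure above.
DELIMITER $$
CREATE PROCEDURE qrtz_purge()
BEGIN
  DECLARE l_id BIGINT(20);
  DECLARE NO_DATA INT DEFAULT 0;
  DECLARE LST_CUR CURSOR FOR
    SELECT id FROM table_name WHERE id > 123;
  -- When the cursor runs out of rows, flag the loop to stop.
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1;

  OPEN LST_CUR;
  SET NO_DATA = 0;
  FETCH LST_CUR INTO l_id;
  WHILE NO_DATA = 0 DO
    DELETE FROM table_name WHERE id = l_id;
    FETCH LST_CUR INTO l_id;
  END WHILE;
  CLOSE LST_CUR;
END $$
DELIMITER ;
```

Deleting one row per iteration keeps each transaction tiny, at the cost of many round trips through the storage engine; the LIMIT-based chunking further down the thread usually strikes a better balance.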

Re: Deleting Records in Big tables

2011-11-04 Thread Reindl Harald
PLEASE do not top-post after you got a reply at the bottom of your quote. Sorry, but I can not help you with your application. If it, for whatever reason, uses the field 'id' in a WHERE clause and your table has no key on this column, your table design is wrong and you have to add the key. Yes, this

Re: Deleting Records in Big tables

2011-11-04 Thread Johan De Meersman
- Original Message - From: Reindl Harald h.rei...@thelounge.net: well i guess you have to sit out... add the key... wrong table design having an id-column without a key... or something weird in the application not using the primary key for such operations. For high-volume insert-only tables
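Adding the key Reindl and Johan are discussing might look like this. A sketch only: the index name is invented, and on a 70 GB table the ALTER itself takes considerable time (and, on MyISAM, locks the table while it builds).

```sql
-- Hypothetical: add the missing index on the column used in the WHERE clause.
ALTER TABLE metadata ADD INDEX idx_metadata_id (id);

-- Check that the delete's predicate can now use the index
-- (EXPLAIN on DELETE itself only arrived in MySQL 5.6, so probe
-- with the equivalent SELECT):
EXPLAIN SELECT COUNT(*) FROM metadata WHERE id > 2474;
```

Without this index, the original DELETE has to scan all 70 GB to find its victims, which by itself explains the multi-hour runtime.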

Re: Deleting Records in Big tables

2011-11-04 Thread Andy Wallace
I've had some luck in the past under similar restrictions deleting in chunks: delete from my_big_table where id > 2474 limit 1000; But really, the best way is to buy some more disk space and use the new-table method. On 11/4/11 1:44 AM, Adarsh Sharma wrote: Thanks Anand, Ananda Kumar wrote:
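Andy's chunked delete can be wrapped in a loop so it runs unattended. A sketch under stated assumptions: the procedure name and batch size are invented, and ROW_COUNT() is used to detect when the last chunk is done.

```sql
-- Hypothetical: repeat the LIMITed delete until no rows remain.
DELIMITER $$
CREATE PROCEDURE delete_in_chunks()
BEGIN
  REPEAT
    DELETE FROM my_big_table WHERE id > 2474 LIMIT 1000;
  UNTIL ROW_COUNT() = 0 END REPEAT;
END $$
DELIMITER ;

CALL delete_in_chunks();
```

Each chunk commits separately, so locks are held briefly and the undo log stays small; a sleep between chunks can be added to ease load on a busy server.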

Re: Deleting Records in Big tables

2011-11-04 Thread Derek Downey
Be careful deleting with LIMIT. If you're replicating, you're not guaranteed the same order of those you've deleted. Perhaps a better way to delete in smaller chunks is to step through the id values: DELETE FROM my_big_table WHERE id > 5000; DELETE FROM my_big_table WHERE id > 4000; etc. -- Derek On
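Derek's replication-safe chunking can be made fully deterministic by bounding each statement with an explicit id range, so master and slave delete exactly the same rows regardless of row order. The bounds below are illustrative, not from the thread:

```sql
-- Hypothetical: each statement covers one disjoint id range.
DELETE FROM my_big_table WHERE id > 5000;
DELETE FROM my_big_table WHERE id > 4000 AND id <= 5000;
DELETE FROM my_big_table WHERE id > 3000 AND id <= 4000;
-- ...continue stepping the cutoff down to the final threshold:
DELETE FROM my_big_table WHERE id > 2474 AND id <= 3000;
```

Unlike DELETE ... LIMIT, each statement's result set is fully determined by the predicate, so statement-based replication replays it identically.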

Re: Deleting Records in Big tables

2011-11-04 Thread Andy Wallace
Excellent point... replication makes many things trickier. On 11/4/11 9:54 AM, Derek Downey wrote: Be careful deleting with LIMIT. If you're replicating, you're not guaranteed the same order of those you've deleted. Perhaps a better way to delete in smaller chunks is to step through the id