Deleting Records in Big tables

2011-11-04 Thread Adarsh Sharma


Dear all,

Today I need to delete some records in > 70 GB tables.
I have 4 tables in mysql database.

My delete command is:

delete from metadata where id < 2474;

but it takes hours to complete.

One of my table structures is:

CREATE TABLE `metadata` (
 `meta_id` bigint(20) NOT NULL AUTO_INCREMENT,
 `id` bigint(20) DEFAULT NULL,
 `url` varchar(800) DEFAULT NULL,
 `meta_field` varchar(200) DEFAULT NULL,
 `meta_value` varchar(2000) DEFAULT NULL,
 `dt_stamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
 PRIMARY KEY (`meta_id`)
) ENGINE=InnoDB AUTO_INCREMENT=388780373 ;


Please let me know the quickest way to do this.
I tried to create indexes on id in these tables, but that takes time too.



Thanks




Re: Deleting Records in Big tables

2011-11-04 Thread Ananda Kumar
Why don't you create a new table with the rows where id > 2474, then
rename the original table to _old and the new table to the actual table name?
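
Something like this (an untested sketch; metadata_new/metadata_old are just
placeholder names, and it needs enough free space for the copied rows):

CREATE TABLE metadata_new LIKE metadata;
INSERT INTO metadata_new SELECT * FROM metadata WHERE id > 2474;
RENAME TABLE metadata TO metadata_old, metadata_new TO metadata;
DROP TABLE metadata_old;   -- only after verifying the copy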

or
You could write a stored proc to loop through the rows and delete them, which
will be faster.

Doing it with just a single plain DELETE statement on that much data will take
ages.

regards
anandkl

On Fri, Nov 4, 2011 at 12:52 PM, Adarsh Sharma adarsh.sha...@orkash.com wrote:


 Dear all,

 Today I need to delete some records in > 70 GB tables.
 I have 4 tables in mysql database.

 My delete command is:

 delete from metadata where id < 2474;

 but it takes hours to complete.

 One of my table structures is:

 CREATE TABLE `metadata` (
  `meta_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `id` bigint(20) DEFAULT NULL,
  `url` varchar(800) DEFAULT NULL,
  `meta_field` varchar(200) DEFAULT NULL,
  `meta_value` varchar(2000) DEFAULT NULL,
  `dt_stamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`meta_id`)
 ) ENGINE=InnoDB AUTO_INCREMENT=388780373 ;


 Please let me know the quickest way to do this.
 I tried to create indexes on id in these tables, but that takes time too.



 Thanks





Re: Deleting Records in Big tables

2011-11-04 Thread Adarsh Sharma

Thanks Anand,


Ananda Kumar wrote:
Why don't you create a new table with the rows where id > 2474, then
rename the original table to _old and the new table to the actual table
name?

I need to delete rows from 5 tables, each > 50 GB, and I don't have
sufficient space to store the extra data.

My application loads 2 GB of data into my databases daily.


or
You could write a stored proc to loop through the rows and delete them,
which will be faster.


Can you provide me a simple example of a stored proc?


Doing it with just a single plain DELETE statement on that much data will
take ages.


Even the CREATE INDEX command on id takes hours to complete.

I think there is no easy way to delete those rows from MySQL tables.

regards
anandkl

On Fri, Nov 4, 2011 at 12:52 PM, Adarsh Sharma adarsh.sha...@orkash.com wrote:



Dear all,

Today I need to delete some records in > 70 GB tables.
I have 4 tables in mysql database.

My delete command is:

delete from metadata where id < 2474;

but it takes hours to complete.

One of my table structures is:

CREATE TABLE `metadata` (
 `meta_id` bigint(20) NOT NULL AUTO_INCREMENT,
 `id` bigint(20) DEFAULT NULL,
 `url` varchar(800) DEFAULT NULL,
 `meta_field` varchar(200) DEFAULT NULL,
 `meta_value` varchar(2000) DEFAULT NULL,
 `dt_stamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
 PRIMARY KEY (`meta_id`)
) ENGINE=InnoDB AUTO_INCREMENT=388780373 ;


Please let me know the quickest way to do this.
I tried to create indexes on id in these tables, but that takes time too.



Thanks








Re: Deleting Records in Big tables

2011-11-04 Thread Reindl Harald


On 04.11.2011 08:22, Adarsh Sharma wrote:
 delete from metadata where id < 2474;
 but it takes hours to complete.
 
 CREATE TABLE `metadata` (
  `meta_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `id` bigint(20) DEFAULT NULL,
  `url` varchar(800) DEFAULT NULL,
  `meta_field` varchar(200) DEFAULT NULL,
  `meta_value` varchar(2000) DEFAULT NULL,
  `dt_stamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`meta_id`)
 ) ENGINE=InnoDB AUTO_INCREMENT=388780373 ;
 
 Please let me know the quickest way to do this.
 I tried to create indexes on id in these tables, but that takes time too.

well, i guess you have to sit it out and add the key.
it's either wrong table design (an id column without a key) or
something weird in the application not using the primary
key for such operations
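
something like this (it will also take a while on a table that size;
idx_id is just a placeholder name):

ALTER TABLE metadata ADD INDEX idx_id (id);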






Re: Deleting Records in Big tables

2011-11-04 Thread Ananda Kumar
DELIMITER //
CREATE PROCEDURE qrtz_purge()
BEGIN
   DECLARE l_id BIGINT(20);
   DECLARE NO_DATA INT DEFAULT 0;
   -- cursor over the ids to delete (table_name and 123 are placeholders)
   DECLARE LST_CUR CURSOR FOR SELECT id FROM table_name WHERE id < 123;
   DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1;
   OPEN LST_CUR;
   SET NO_DATA = 0;
   FETCH LST_CUR INTO l_id;
   WHILE NO_DATA = 0 DO
      -- delete one row at a time and commit, so each transaction stays small
      DELETE FROM table_name WHERE id = l_id;
      COMMIT;
      SET NO_DATA = 0;
      FETCH LST_CUR INTO l_id;
   END WHILE;
   CLOSE LST_CUR;
END //
DELIMITER ;
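
Then run it with (assuming you created it in the schema that holds your table):

CALL qrtz_purge();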

On Fri, Nov 4, 2011 at 2:40 PM, Reindl Harald h.rei...@thelounge.net wrote:



 On 04.11.2011 08:22, Adarsh Sharma wrote:
  delete from metadata where id < 2474;
  but it takes hours to complete.
 
  CREATE TABLE `metadata` (
   `meta_id` bigint(20) NOT NULL AUTO_INCREMENT,
   `id` bigint(20) DEFAULT NULL,
   `url` varchar(800) DEFAULT NULL,
   `meta_field` varchar(200) DEFAULT NULL,
   `meta_value` varchar(2000) DEFAULT NULL,
   `dt_stamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
   PRIMARY KEY (`meta_id`)
  ) ENGINE=InnoDB AUTO_INCREMENT=388780373 ;
 
  Please let me know the quickest way to do this.
  I tried to create indexes on id in these tables, but that takes time too.

 well, i guess you have to sit it out and add the key.
 it's either wrong table design (an id column without a key) or
 something weird in the application not using the primary
 key for such operations





Re: Deleting Records in Big tables

2011-11-04 Thread Reindl Harald
PLEASE do not top-post after you got a reply
at the bottom of your quote

sorry, but i can not help you with your application.
if it for whatever reason uses the field 'id' in a WHERE statement
and your table has no key on this column, your table design is
wrong and you have to add the key

yes, this hurts with huge tables, but that is the price for not looking
at the table design at the very beginning of a project

On 04.11.2011 11:00, Ananda Kumar wrote:
 DELIMITER //
 CREATE PROCEDURE qrtz_purge()
 BEGIN
    DECLARE l_id BIGINT(20);
    DECLARE NO_DATA INT DEFAULT 0;
    DECLARE LST_CUR CURSOR FOR SELECT id FROM table_name WHERE id < 123;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1;
    OPEN LST_CUR;
    SET NO_DATA = 0;
    FETCH LST_CUR INTO l_id;
    WHILE NO_DATA = 0 DO
       DELETE FROM table_name WHERE id = l_id;
       COMMIT;
       SET NO_DATA = 0;
       FETCH LST_CUR INTO l_id;
    END WHILE;
    CLOSE LST_CUR;
 END //
 DELIMITER ;
 
 On Fri, Nov 4, 2011 at 2:40 PM, Reindl Harald h.rei...@thelounge.net wrote:
 
 On 04.11.2011 08:22, Adarsh Sharma wrote:
  delete from metadata where id < 2474;
  but it takes hours to complete.

 CREATE TABLE `metadata` (
  `meta_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `id` bigint(20) DEFAULT NULL,
  `url` varchar(800) DEFAULT NULL,
  `meta_field` varchar(200) DEFAULT NULL,
  `meta_value` varchar(2000) DEFAULT NULL,
  `dt_stamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`meta_id`)
 ) ENGINE=InnoDB AUTO_INCREMENT=388780373 ;

  Please let me know the quickest way to do this.
  I tried to create indexes on id in these tables, but that takes time too.

  well, i guess you have to sit it out and add the key.
  it's either wrong table design (an id column without a key) or
  something weird in the application not using the primary
  key for such operations





Re: Deleting Records in Big tables

2011-11-04 Thread Johan De Meersman
- Original Message -
 From: Reindl Harald h.rei...@thelounge.net
 
 well, i guess you have to sit it out and add the key.
 it's either wrong table design (an id column without a key) or
 something weird in the application not using the primary
 key for such operations

For high-volume insert-only tables the lack of a key could be intentional, with 
the obvious downside that any non-insert operations are going to complete in 
O(fuck).

Adarsh, I suggest you first work out whether the lack of a primary key was 
intentional. If you decide that it is safe to add one (and that you actually do 
want one), your fastest route is going to be creating a second table 
identical to the original one but with the addition of the appropriate primary 
key, insert-select-ing the data you want to retain, and switching out the tables 
as per Ananda's suggestion. That'll take roughly the same time as your delete 
operation (well, a bit more for the index build), but you end up with a fully 
defragmented table with an index.

If, however, you find that adding a key is undesirable, I still recommend 
(also as per Ananda's suggestion) that you recreate and switch out the table.

You can't do it atomically, unfortunately - DDL statements like RENAME TABLE 
are never part of a larger transaction and can also not be executed while the 
affected tables are locked. Make sure to check whether any inserts have happened 
between the copy and the rename - they shouldn't, really, if you type the 
commands as a single semicolon-divided line; but check anyway. Ideally, of 
course, clients are not even connected while you perform this.
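
For example, typed as one semicolon-divided line (a sketch; metadata_new, 
metadata_old and idx_id are placeholder names, and the ADD INDEX step applies 
only if you decided you want the key):

CREATE TABLE metadata_new LIKE metadata; ALTER TABLE metadata_new ADD INDEX idx_id (id); INSERT INTO metadata_new SELECT * FROM metadata WHERE id > 2474; RENAME TABLE metadata TO metadata_old, metadata_new TO metadata;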

All of the above, however, is only relevant if you need to delete a LOT of the 
data (say, over half) - the create/insert path is going to have to write the 
full records to disk, which is much slower than just marking records as deleted 
in the existing table. If you are only going to delete a (relative) handful of 
records, just delete them and be done with it.

As for the segmented delete, I'm not sure that's going to be useful here. 
There are no usable keys, so regardless of how many records you delete, the 
table is going to be fully scanned anyway. I suspect that segmented deletes are 
going to yield only a fractional speed benefit, while multiplying the total time 
by the number of segments you've cut the delete into.


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel




Re: Deleting Records in Big tables

2011-11-04 Thread Andy Wallace

I've had some luck in the past under similar restrictions deleting in
chunks:

delete from my_big_table where id < 2474 limit 1000;
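
If you'd rather let the server drive the loop, the same idea wrapped in a
procedure (a sketch; purge_chunks is a made-up name, and it relies on each
chunk committing under the default autocommit):

DELIMITER //
CREATE PROCEDURE purge_chunks()
BEGIN
   REPEAT
      -- delete one small chunk per pass
      DELETE FROM my_big_table WHERE id < 2474 LIMIT 1000;
   UNTIL ROW_COUNT() = 0 END REPEAT;
END //
DELIMITER ;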

But really, the best way is to buy some more disk space and use the
new-table method.

On 11/4/11 1:44 AM, Adarsh Sharma wrote:

Thanks Anand,


Ananda Kumar wrote:

Why don't you create a new table with the rows where id > 2474, then rename the 
original table to _old and the new table to the actual table name?

I need to delete rows from 5 tables, each > 50 GB, and I don't have sufficient 
space to store the extra data.
My application loads 2 GB of data into my databases daily.


or
You could write a stored proc to loop through the rows and delete them, which 
will be faster.


Can you provide me a simple example of a stored proc?


Doing it with just a single plain DELETE statement on that much data will take ages.


Even the CREATE INDEX command on id takes hours to complete.

I think there is no easy way to delete those rows from MySQL tables.







regards
anandkl

On Fri, Nov 4, 2011 at 12:52 PM, Adarsh Sharma adarsh.sha...@orkash.com wrote:


Dear all,

Today I need to delete some records in > 70 GB tables.
I have 4 tables in mysql database.

My delete command is:

delete from metadata where id < 2474;

but it takes hours to complete.

One of my table structures is:

CREATE TABLE `metadata` (
`meta_id` bigint(20) NOT NULL AUTO_INCREMENT,
`id` bigint(20) DEFAULT NULL,
`url` varchar(800) DEFAULT NULL,
`meta_field` varchar(200) DEFAULT NULL,
`meta_value` varchar(2000) DEFAULT NULL,
`dt_stamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`meta_id`)
) ENGINE=InnoDB AUTO_INCREMENT=388780373 ;


Please let me know the quickest way to do this.
I tried to create indexes on id in these tables, but that takes time too.



Thanks








--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.




Re: Deleting Records in Big tables

2011-11-04 Thread Derek Downey
Be careful deleting with LIMIT. If you're replicating, you're not guaranteed 
the same order for the rows you've deleted.

Perhaps a better way to delete in smaller chunks is to increase the id value
step by step:

DELETE FROM my_big_table WHERE id < 4000;
DELETE FROM my_big_table WHERE id < 5000;
etc.
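
Since meta_id is the primary key on Adarsh's metadata table, another option is
to bound each chunk by it, so every statement only touches one slice of the
table (a sketch; the 10M step size is arbitrary):

DELETE FROM metadata WHERE id < 2474 AND meta_id BETWEEN 1 AND 10000000;
DELETE FROM metadata WHERE id < 2474 AND meta_id BETWEEN 10000001 AND 20000000;
etc.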

-- Derek

On Nov 4, 2011, at 12:47 PM, Andy Wallace wrote:

 I've had some luck in the past under similar restrictions deleting in
 chunks:
 
 delete from my_big_table where id < 2474 limit 1000;
 
 But really, the best way is to buy some more disk space and use the
 new table method
 
 On 11/4/11 1:44 AM, Adarsh Sharma wrote:
 Thanks Anand,
 
 
 Ananda Kumar wrote:
 Why don't you create a new table with the rows where id > 2474, then rename 
 the original table to _old and the new table to the actual table name?
 I need to delete rows from 5 tables, each > 50 GB, and I don't have sufficient 
 space to store the extra data.
 My application loads 2 GB of data into my databases daily.
 
 or
 You could write a stored proc to loop through the rows and delete them, which 
 will be faster.
 
 Can you provide me a simple example of a stored proc?
 
 Doing it with just a single plain DELETE statement on that much data will 
 take ages.
 
 Even the CREATE INDEX command on id takes hours to complete.
 
 I think there is no easy way to delete those rows from MySQL tables.
 
 
 
 
 
 
 regards
 anandkl
 
 On Fri, Nov 4, 2011 at 12:52 PM, Adarsh Sharma adarsh.sha...@orkash.com wrote:
 
 
 Dear all,
 
 Today I need to delete some records in > 70 GB tables.
 I have 4 tables in mysql database.
 
 My delete command is:
 
 delete from metadata where id < 2474;
 
 but it takes hours to complete.
 
 One of my table structures is:
 
 CREATE TABLE `metadata` (
 `meta_id` bigint(20) NOT NULL AUTO_INCREMENT,
 `id` bigint(20) DEFAULT NULL,
 `url` varchar(800) DEFAULT NULL,
 `meta_field` varchar(200) DEFAULT NULL,
 `meta_value` varchar(2000) DEFAULT NULL,
 `dt_stamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
 PRIMARY KEY (`meta_id`)
 ) ENGINE=InnoDB AUTO_INCREMENT=388780373 ;
 
 
 Please let me know the quickest way to do this.
 I tried to create indexes on id in these tables, but that takes time too.
 
 
 
 Thanks
 
 
 
 
 
 
 -- 
 Andy Wallace
 iHOUSEweb, Inc.
 awall...@ihouseweb.com
 (866) 645-7700 ext 219
 --
 There are two ways to build software:
 Make it so simple that there are obviously no bugs,
 or make it so complex that there are no obvious bugs.
 
 





Re: Deleting Records in Big tables

2011-11-04 Thread Andy Wallace

Excellent point... replication makes many things trickier.

On 11/4/11 9:54 AM, Derek Downey wrote:

Be careful deleting with LIMIT. If you're replicating, you're not guaranteed 
the same order for the rows you've deleted.

Perhaps a better way to delete in smaller chunks is to increase the id value
step by step:

DELETE FROM my_big_table WHERE id < 4000;
DELETE FROM my_big_table WHERE id < 5000;
etc.

-- Derek

On Nov 4, 2011, at 12:47 PM, Andy Wallace wrote:


I've had some luck in the past under similar restrictions deleting in
chunks:

delete from my_big_table where id < 2474 limit 1000;

But really, the best way is to buy some more disk space and use the
new-table method.

On 11/4/11 1:44 AM, Adarsh Sharma wrote:

Thanks Anand,


Ananda Kumar wrote:

Why don't you create a new table with the rows where id > 2474, then rename the 
original table to _old and the new table to the actual table name?

I need to delete rows from 5 tables, each > 50 GB, and I don't have sufficient 
space to store the extra data.
My application loads 2 GB of data into my databases daily.


or
You could write a stored proc to loop through the rows and delete them, which 
will be faster.


Can you provide me a simple example of a stored proc?


Doing it with just a single plain DELETE statement on that much data will take ages.


Even the CREATE INDEX command on id takes hours to complete.

I think there is no easy way to delete those rows from MySQL tables.







regards
anandkl

On Fri, Nov 4, 2011 at 12:52 PM, Adarsh Sharma adarsh.sha...@orkash.com wrote:


Dear all,

Today I need to delete some records in > 70 GB tables.
I have 4 tables in mysql database.

My delete command is:

delete from metadata where id < 2474;

but it takes hours to complete.

One of my table structures is:

CREATE TABLE `metadata` (
`meta_id` bigint(20) NOT NULL AUTO_INCREMENT,
`id` bigint(20) DEFAULT NULL,
`url` varchar(800) DEFAULT NULL,
`meta_field` varchar(200) DEFAULT NULL,
`meta_value` varchar(2000) DEFAULT NULL,
`dt_stamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`meta_id`)
) ENGINE=InnoDB AUTO_INCREMENT=388780373 ;


Please let me know the quickest way to do this.
I tried to create indexes on id in these tables, but that takes time too.



Thanks








--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.







--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.




Re: InnoDB #sql files

2011-11-04 Thread Reindl Harald


On 04.11.2011 19:12, Ian Rubado wrote:
 Hi there,
 
 I had the same issue as you posted about at the bottom of:
 
 http://bugs.mysql.com/bug.php?id=20867
 
 I was curious if you ever found a solution. I ended up converting tables to
 MyISAM and flushing my InnoDB files to resolve it.

no, these bullshit files have been sitting here since summer 2009, and
converting a production dbmail database to MyISAM would be the same as
destroying it


