Re: Need to speed up deletes

2006-06-22 Thread Brent Baisley
You should try deleting in smaller batches if you can. I've found things go a lot quicker that way, and you don't lock up 
your table for long durations if you want to stick with MyISAM.
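With MySQL's DELETE ... LIMIT, the batched approach can be sketched like this (the table and column names are made up for illustration):

```sql
-- Hypothetical sketch: delete in chunks of 10,000 rows instead of one
-- big statement; re-run the statement until it affects zero rows.
DELETE FROM orders
 WHERE order_date < '2006-01-01'
 LIMIT 10000;
```

Each run then holds the table lock only briefly, so other clients can get in between batches.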


- Original Message - 
From: mos [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Wednesday, June 21, 2006 5:55 PM
Subject: Need to speed up deletes


I need to delete 200,000 rows from an 8 million row MyISAM table using an index. It's taking 20 minutes which is quite slow. Is 
there any way to speed it up? TIA


Mike
MySQL 4.1.10


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]







Need to speed up deletes

2006-06-21 Thread mos
I need to delete 200,000 rows from an 8 million row MyISAM table using an 
index. It's taking 20 minutes which is quite slow. Is there any way to 
speed it up? TIA


Mike
MySQL 4.1.10





Re: Need to speed up deletes

2006-06-21 Thread David Griffiths

Is the table heavily indexed? Indexes slow down inserts and deletes (and, 
under certain conditions, updates).

Also, since you are deleting approximately 2.5% of your rows, your delete statement must have a where-clause. You might want to 
run EXPLAIN on it to make sure it's not an optimization issue.
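Since MySQL 4.1 can only EXPLAIN a SELECT, one way to check the plan is to run the delete's where-clause as an equivalent select (the table and column names below are made up):

```sql
-- MySQL 4.1 cannot EXPLAIN a DELETE, so check whether the where-clause
-- uses the index by running it as a SELECT with the same condition:
EXPLAIN SELECT COUNT(*)
  FROM orders
 WHERE order_date < '2006-01-01';
```

If the output shows type ALL (a full table scan) rather than range or ref, the delete is not using the index.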


I would look at your disk drives and their interface (ATA, SATA, SCSI). I'm 
sure there is a fair bit of disk-writing going on.

You can try dropping some unneeded indexes (if there are any) and see if that helps. You could also drop all indexes except the 
one with the lowest cardinality in the where-clause of your delete. That might not be feasible, however.


Finally, try converting the table to InnoDB (you'll need to do some configuration in your my.cnf) - it tends to perform better in 
circumstances such as yours. I would do this on a test server first.
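Assuming a hypothetical orders table, the conversion itself is a single statement:

```sql
-- Hypothetical table name; the conversion rebuilds and copies every
-- row, so expect it to take a while on an 8-million-row table, and
-- try it on a test server first as noted above.
ALTER TABLE orders ENGINE=InnoDB;
```

You would also want to size innodb_buffer_pool_size in my.cnf before relying on it for production work.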


David



mos wrote:
I need to delete 200,000 rows from an 8 million row MyISAM table using 
an index. It's taking 20 minutes which is quite slow. Is there any way 
to speed it up? TIA


Mike
MySQL 4.1.10







Need For SPEED

2003-08-21 Thread Creigh Shank
Using an Apache/PHP/MySQL/Linux (Redhat 8.0) solution, PHPList, to create 
an e-mailing list for our 5.6 million book club members.  Unfortunately, 
the import speed for importing records (at record number 150,000) is about 
2,000 records per hour.  We're running on the following:

P4 (1.5 Ghz), 1.2 Gbytes RAM (650 Mbytes RAM Disk using ramfs), IDE drive 
(7,200 rpm)

So far we've moved the MySQL data files (var/lib/mysql), PHP /tmp and 
upload directories and PHPlist web site files to RAM Disk (still just 
testing - not yet dealing with data safety issues).  With all of this 
tuning we're still at only 2,000 records per hour for uploading.

We need to be at 100,000 records per hour (uploading and sending seem to 
run at about the same rate - we need to be able to send to all book club 
members in the same week).  Any suggestions?

Creigh

(We're planning to run the system on a server with dual Opterons, 8 Gbytes 
RAM and RAID-5 SCSI drives, but I don't think the additional system 
horsepower will solve our problem.)



Re: Need For SPEED

2003-08-21 Thread Roger Baklund
* Creigh Shank
 Using an Apache/PHP/MySQL/Linux (Redhat 8.0) solution, PHPList, to create
 an e-mailing list for our 5.6 million book club members.  Unfortunately,
 the import speed for importing records (at record number 150,000)
 is about
 2,000 records per hour.  We're running on the following:

 P4 (1.5 Ghz), 1.2 Gbytes RAM (650 Mbytes RAM Disk using ramfs), IDE drive
 (7,200 rpm)

I'm missing a number in the calculation: how big is each record (or the total
data file)? If each record is 10 MB, your speed is good; if it is 50 bytes,
your speed is lousy.

What method of import do you use?

 So far we've moved the MySQL data files (var/lib/mysql), PHP /tmp and
 upload directories and PHPlist web site files to RAM Disk (still just
 testing - not yet dealing with data safety issues).  With all of this
 tuning we're still at only 2,000 records per hour for uploading.

Do you mean you got no improvement moving to ramfs, or was it even worse
before? Anyhow, something is wrong, unless your records are very big.

You say 'uploading'? Is the import done remotely? What is the bandwidth? If
putting everything on ramfs didn't improve anything, I would guess the
network is the bottleneck.

 We need to be at 100,000 records per hour

That's about 28 rows/sec, which shouldn't be a problem if the records are reasonably small, at
least if you run the import locally on the server.

 (uploading and sending seem to
 run at about the same rate - we need to be able to send to all book club
 members in the same week).

Sending? This looks more and more like a network issue... or maybe a PHPList
issue?

 Any suggestions?

Give more info. :)

--
Roger





Re: Need For SPEED

2003-08-21 Thread mos
At 01:02 PM 8/21/2003, you wrote:
Using an Apache/PHP/MySQL/Linux (Redhat 8.0) solution, PHPList, to create 
an e-mailing list for our 5.6 million book club members.  Unfortunately, 
the import speed for importing records (at record number 150,000) is about 
2,000 records per hour.  We're running on the following:

P4 (1.5 Ghz), 1.2 Gbytes RAM (650 Mbytes RAM Disk using ramfs), IDE drive 
(7,200 rpm)

So far we've moved the MySQL data files (var/lib/mysql), PHP /tmp and 
upload directories and PHPlist web site files to RAM Disk (still just 
testing - not yet dealing with data safety issues).  With all of this 
tuning we're still at only 2,000 records per hour for uploading.

We need to be at 100,000 records per hour (uploading and sending seem to 
run at about the same rate - we need to be able to send to all book club 
members in the same week).  Any suggestions?

Creigh

(We're planning to run the system on a server with dual Opterons, 8 Gbytes 
RAM and RAID-5 SCSI drives, but I don't think the additional system 
horsepower will solve our problem.)
What format is the data in that you are importing?
Are you using MyISAM data files and not InnoDB?
I've imported CSV data at around 1.5-2.5 million rows per minute using LOAD 
DATA INFILE (2.4 GHz on Win2k). The problem with this command is that the data 
being imported must be valid; otherwise the row will be rejected and you 
will have a tough time finding out which row it was. MySQL Cookbook by Paul 
DuBois has some ideas on how to use LOAD DATA INFILE more reliably.
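A typical invocation looks like this; the file path, table name, and column list are hypothetical, and the file must be readable by the mysqld server itself:

```sql
-- Sketch of a bulk CSV load into a made-up members table.
LOAD DATA INFILE '/tmp/members.csv'
INTO TABLE members
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(name, email);
```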

Even with INSERT statements I'm getting around 1,000 rows per second.
You can try inserting more than one row with a single INSERT statement; 
this should speed things up some.
Or wrap transactions around every 50-200 rows. (You will need to see how many 
rows per transaction give you the best results.)
Disable the keys, if possible, while inserting (see ALTER TABLE ... DISABLE KEYS).
Lock the table, if possible, while inserting the rows.
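Combining those tips for a hypothetical members table might look like this (note that DISABLE KEYS only defers maintenance of non-unique indexes, and only on MyISAM tables):

```sql
-- Sketch: disable non-unique index maintenance, lock the table,
-- insert many rows per statement, then rebuild the indexes in one pass.
ALTER TABLE members DISABLE KEYS;
LOCK TABLES members WRITE;
INSERT INTO members (name, email) VALUES
  ('Alice Example', 'alice@example.com'),
  ('Bob Example',   'bob@example.com');
UNLOCK TABLES;
ALTER TABLE members ENABLE KEYS;
```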

This should get you started. Paul DuBois has a section on this in his other 
book, MySQL, 2nd Edition, on page 279.

Mike







Re: [PHP-DB] Need For SPEED

2003-08-21 Thread John Stoffel
 Creigh == Creigh Shank [EMAIL PROTECTED] writes:

Creigh Using an Apache/PHP/MySQL/Linux (Redhat 8.0) solution,
Creigh PHPList, to create an e-mailing list for our 5.6 million book
Creigh club members.  Unfortunately, the import speed for importing
Creigh records (at record number 150,000 the rate is about 2,000
Creigh records per hour).  We're running on the following:

Creigh P4 (1.5 Ghz), 1.2 Gbytes RAM (650 Mbytes RAM Disk using
Creigh ramfs), IDE drive (72,00 rpm)

Creigh So far we've moved the MySQL data files (var/lib/mysql), PHP
Creigh /tmp and upload directories and PHPlist web site files to RAM
Creigh Disk (still just testing - not yet dealing with data safety
Creigh issues).  With all of this tuning we're still at only 2,000
Creigh records per hour for uploading.

Creigh We need to be at 100,000 records per hour (uploading and
Creigh sending seem to run at about the same rate - we need to be
Creigh able to send to all book club members in the same week).  Any
Creigh suggestions?

What is your data source for the records?  Are they in a bunch of flat
files?  If so, cut out the overhead of Apache/PHP and use Perl with
the DBI modules to insert the records directly into your database.

Also, make sure you have the proper indexes for your database; that can
be a killer. And for that size of DB, I'd make sure you're using
InnoDB tables.

What are your mysql settings like as well?  You can tune them up quite
a bit since your import will be the big problem at first, but then it
will mostly be just reads on the tables when you send out emails.
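Which settings matter depends on the version, but for bulk imports on a MySQL 4.x server the usual suspects are the key and bulk-insert buffers. The values below are illustrative only, not recommendations; they belong in my.cnf:

```
[mysqld]
key_buffer_size         = 256M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 64M
```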

John
   John Stoffel - Senior Unix Systems Administrator - Lucent Technologies
 [EMAIL PROTECTED] - http://www.lucent.com - 978-952-7830
