Hi all,
Keep in mind that indexes are good for SELECTs, but very bad for massive
INSERT, UPDATE, and DELETE operations.

If you want to do a massive delete from a cron job, it's better to:
 * select the rows to delete (using the indexes)
 * drop the indexes
 * delete the rows (already identified)
 * recreate the indexes
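On MyISAM, the steps above can be sketched as follows (the table, column, and cutoff are hypothetical; note that DISABLE KEYS only affects non-unique indexes):

```sql
-- Hypothetical example: purge old rows from a MyISAM table `log`.
-- Step 1: stop maintaining the non-unique indexes during the delete.
ALTER TABLE log DISABLE KEYS;

-- Step 2: delete the rows (the candidate set was identified via the
-- index before disabling it).
DELETE FROM log WHERE created < '2005-01-01';

-- Step 3: rebuild the indexes in one bulk pass, which is much faster
-- than maintaining them row by row during the delete.
ALTER TABLE log ENABLE KEYS;
```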

Another way, if you want to delete a big percentage of your table, is to
copy the rows you want to keep, drop the table, and recreate it with those
rows. Then recreate the indexes.
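That copy-and-swap approach might look like this (table and column names are hypothetical; the RENAME swaps both tables in one atomic statement):

```sql
-- Create an empty table with the same structure (including indexes).
CREATE TABLE log_new LIKE log;

-- Copy only the rows you want to keep.
INSERT INTO log_new SELECT * FROM log WHERE created >= '2005-01-01';

-- Swap the tables atomically, then discard the old data.
RENAME TABLE log TO log_old, log_new TO log;
DROP TABLE log_old;
```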

I assume that you're not in a heavily transactional situation, and are
perhaps using MyISAM storage. If not, SHOW PROCESSLIST may help you track
whether internal temporary tables are being used, etc.
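To check for internal temporary tables spilling to disk, something like this should work (the status counter names are standard MySQL):

```sql
-- See what every connection is currently doing.
SHOW FULL PROCESSLIST;

-- Compare Created_tmp_disk_tables against Created_tmp_tables:
-- a high ratio means temporary tables are too big for memory.
SHOW STATUS LIKE 'Created_tmp%';
```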


Mathias

-----Original Message-----
From: Jigal van Hemert [mailto:[EMAIL PROTECTED] 
Sent: dimanche 24 avril 2005 17:05
To: Almar van Pel; mysql@lists.mysql.com
Subject: Re: Performance issues when deleting and reading on large table

From: "Almar van Pel"

> After some time (sometimes a week, sometimes a month) it appears that
> the index of the table gets stuck. It tries to read from the table but
> does not get a response. This causes the connection queue to fill up
> and the load on the system increases dramatically. In other words,
> unless I do an OPTIMIZE TABLE, the system hangs. Most of the time you
> see that the index is about 20 MB off.
> When I do CHECK TABLE (before optimizing) there are no errors.
>
> Is there any way to see this problem coming, so I can outrun it?
> (Without having to schedule OPTIMIZE, which = downtime, every week..)

You should run OPTIMIZE TABLE regularly (once a week or so) in some cases:
http://dev.mysql.com/doc/mysql/en/optimize-table.html
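If downtime is the concern, the weekly run can at least be scheduled for the quietest hour. A crontab entry might look like this (user, password, database, and table names are placeholders):

```
# Hypothetical crontab entry: optimize the table every Sunday at 04:00.
# The table is locked while OPTIMIZE TABLE runs, so pick a quiet window.
0 4 * * 0  mysql -u admin -pSECRET mydb -e "OPTIMIZE TABLE messages"
```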

It's probably a case of the index cardinality statistics being wrong, and
thus the optimizer making poor decisions for queries.

> Trying to get the previous table clean, I created some jobs deleting
> old records. When I delete a lot of records in one job, the system
> also nearly hangs (+/- 10 to 15,000 records). The load again increases
> dramatically. I tried every trick in the book, but cannot understand
> why this action is so heavy for the system.

Deleting a lot of records has an impact on the indexes, so it's quite a
job. With MyISAM, inserts/updates/deletes will also block the table for
reading while they run.
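One way to keep the table-level lock short is to delete in small batches instead of one big job (table, column, and batch size are hypothetical):

```sql
-- Delete at most 1000 rows per statement, so readers are blocked
-- only briefly each time instead of for the whole purge.
DELETE FROM messages WHERE created < '2005-01-01' LIMIT 1000;

-- Repeat this statement (e.g. from a script with a short sleep between
-- runs) until ROW_COUNT() reports 0 rows affected.
```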

Such a large table in a high-concurrency situation (many writes and many
reads) can be a job for InnoDB tables. They seem slow for small tables,
but tend to keep the same speed for large tables, while MyISAM will
probably get slower as the table grows under these circumstances.
If you can use the PRIMARY index in a query and keep the PRIMARY key as
short as possible, InnoDB can be a very fast table handler.
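A sketch of what that might look like (table and column names are hypothetical; the point is the short integer PRIMARY KEY, since InnoDB secondary indexes store a copy of the primary key in every entry):

```sql
CREATE TABLE messages (
  id      INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- short primary key
  body    TEXT,
  created DATETIME NOT NULL,
  PRIMARY KEY (id),
  KEY idx_created (created)  -- secondary index; carries `id` internally
) ENGINE=InnoDB;
```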

Depending on the size of the resulting record sets, your system must have
enough memory to handle them. Otherwise a lot of temporary tables will end
up on disk (slow), and indexes cannot be loaded into memory (slow).

Running large databases is sometimes a bit of a challenge: finding the
right queries, setting up the right index(es), etc.

Regards, Jigal.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]


