Hi, I'm managing a large free website running on Linux, with MySQL 4.0.23 on a P4 2.6 GHz machine with 2 GB of memory. MySQL is configured with a 382 MB key buffer.
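For reference, the relevant part of my.cnf looks roughly like this (other settings omitted; the key buffer line is the one I mean):

```
[mysqld]
key_buffer = 382M
```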
There are two problems I have, both involving a large table. This table contains 5 million records and is the core of our application. It is a MyISAM table with two fields in its key. The table itself is 1.2 GB and has a 200 MB index file, containing not only the key but some additional indexes. The website is visited by more than 100,000 unique visitors a day, which means a lot of SELECTs and INSERTs and a very small number of UPDATEs and DELETEs.

Problem 1: After some time (sometimes a week, sometimes a month) the index of the table appears to get stuck. Reads against the table get no response. This causes the connection queue to fill up and the load on the system to increase dramatically. In other words, unless I run an OPTIMIZE TABLE, the system hangs. Most of the time you can see that the index file is about 20 MB off. When I run CHECK TABLE (before optimizing), there are no errors. Is there any way to see this problem coming, so I can get ahead of it? (Without having to schedule OPTIMIZE TABLE, which means downtime, every week.)

Problem 2: Trying to keep this table clean, I created some jobs that delete old records. When I delete a lot of records in one job (roughly 10,000 to 15,000 records), the system also nearly hangs; the load again increases dramatically. I have tried every trick in the book, but cannot understand why this action is so heavy for the system. Can anyone explain why this is such a heavy operation, and what more I can do to optimize this process?

A general question: I've seen that mysqld sometimes does a FLUSH TABLES on its own. I can't figure out when, why, or for what reason. Does anyone know?

Best regards,

Almar van Pel

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]