Axel IS Main wrote:
I have a PHP app that updates an ever-growing table with new information on a regular basis. Very often the information is duplicated. I'm currently handling this by checking the table for duplicate values every time I go to add new data. As you can imagine, as the table grows this check takes longer and longer, and the whole process gets slower and slower. To speed things up, I'm wondering if it might be a good idea to disallow duplicates in a given field. The question is, if there is a duplicate, how will MySQL react? And what's the best way to handle that reaction? Also, will this actually be faster than the way I'm doing it now?
Nick
Since the process is getting noticeably slower, I would guess you don't have an index on the columns in question. You may be able to speed up your current process to an acceptable level just by adding an appropriate index.
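To illustrate, an ordinary index on the checked column is enough to speed up the existing duplicate check. A sketch, assuming a hypothetical table named `log` and a checked column named `entry` (both names are placeholders, not from your app):

```sql
-- Ordinary (non-unique) index: speeds up the duplicate lookup,
-- but still allows duplicate values to be inserted.
ALTER TABLE log ADD INDEX idx_entry (entry);
```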
Better yet, if you add a UNIQUE index to the appropriate column or group of columns, MySQL will reject duplicates with "ERROR 1062: Duplicate entry...". I'm guessing your current code is something like this:
    query mysql table for duplicate
    if no duplicate
        insert new data in mysql table
        if mysql returns error
            handle the error
    else
        whatever you do with duplicates
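The unique index itself can be added like this, again assuming a hypothetical table `log` and column `entry`; list several columns if uniqueness should apply to the combination rather than a single field:

```sql
-- Single-column uniqueness:
ALTER TABLE log ADD UNIQUE INDEX uniq_entry (entry);

-- Or uniqueness over a combination of columns:
-- ALTER TABLE log ADD UNIQUE INDEX uniq_pair (col_a, col_b);
```

Note that the ALTER will fail if the table already contains duplicates in that column, so you would have to clean those out first.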
Once you add the unique index, you can change it to something like this:
    insert new data
    if mysql returns error
        if error is duplicate row
            whatever you do with duplicates
        else
            handle other errors
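If "whatever you do with duplicates" is simply to skip or overwrite them, MySQL can handle the reaction for you with no error checking at all. A sketch, using the same placeholder table and column names:

```sql
-- Silently skip rows that would violate the unique index:
INSERT IGNORE INTO log (entry) VALUES ('some value');

-- Or update an existing row when the key is already present
-- (requires MySQL 4.1 or later); seen_count is a placeholder column:
INSERT INTO log (entry, seen_count) VALUES ('some value', 1)
    ON DUPLICATE KEY UPDATE seen_count = seen_count + 1;
```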
If you need help with the index, put EXPLAIN in front of the SELECT query you currently use to check for duplicates and post the result.
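For example, with the placeholder names from above, the check would look like:

```sql
EXPLAIN SELECT id FROM log WHERE entry = 'some value';
```

If the `type` column of the EXPLAIN output says `ALL`, every insert is doing a full table scan, which would explain the slowdown.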
Michael
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]