On Sun, Mar 21, 2004 at 07:24:48PM -0800, Axel IS Main wrote:
> I have a php app that updates an ever-growing table with new
> information on a regular basis. Very often the information is
> duplicated. I'm currently handling this by checking the table for
> duplicate values every time I go to add new data. As you can imagine,
> as the table grows it takes longer and longer for this to happen, and
> the process gets slower and slower. In order to speed things up I'm
> wondering if it might not be a good idea to disallow duplication in a
> given field. The question is, if there is a duplicate, how will MySQL
> react? And what's the best way to manage that reaction? Also, will this
> actually be faster than doing it the way I'm doing it now?



Perhaps you could hash all the field values into a single 32-bit value,
then check for that value in the hash column before inserting. You might
get the occasional false positive, but they will be few and far between.
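To make the idea concrete, here is a minimal sketch (in Python rather than PHP, and with hypothetical names -- none of this is from the original app) of collapsing a row's field values into one 32-bit CRC32 hash and checking new rows against the hashes already seen. A hash hit is only a *probable* duplicate, since CRC32 can collide; a miss, however, is guaranteed to be a new row.

```python
import zlib

def row_hash(fields):
    """Collapse all field values of a row into a single 32-bit integer."""
    # Join with a separator unlikely to appear in the data (0x1F, unit separator)
    # so ("ab", "c") and ("a", "bc") don't hash to the same string.
    joined = "\x1f".join(str(f) for f in fields)
    return zlib.crc32(joined.encode("utf-8")) & 0xFFFFFFFF

seen = set()  # in the real app this would be an indexed hash column in the table

def probably_duplicate(fields):
    """True if this row's hash was seen before (may be a rare false positive)."""
    h = row_hash(fields)
    if h in seen:
        return True
    seen.add(h)
    return False
```

In the database version you would store the hash in its own indexed column and test it with a cheap indexed lookup before inserting; only on a hash match would you fall back to comparing the actual field values.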

--
Jim Richardson     http://www.eskimo.com/~warlock
The race isn't always to the swift, nor the battle to the strong,
But it's the safest way to bet.
