I don't feel like I have enough information to write a code snippet, 
but here are a few thoughts:

1.

I suppose you are aware of DISTINCT VALUES,
but the command normally works from the index,
so if you suspect that the indexes are somehow corrupt,
you can no longer trust it either.
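for what it's worth, here is a rough sketch of that check
([MyTable] and [MyTable]Code are made-up names; DISTINCT VALUES
works on the current selection):

  ARRAY TEXT($atValues;0)
  ALL RECORDS([MyTable])
  DISTINCT VALUES([MyTable]Code;$atValues)  // index-based when the field is indexed
  If (Records in selection([MyTable])>Size of array($atValues))
    // at least one duplicate value, or at least one null record
  End if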

2.

the duplicates could be null.
by definition, you can't index null, because it is the absence of a value,
so you need to scan all records to know which ones are null.
however, if the field is unique, you can tell that there is more than one null
when there are more records than distinct values plus one.
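continuing the sketch above, and assuming DISTINCT VALUES leaves
null values out of the array, the difference between the two counts
is the number of null records in a unique field:

  C_LONGINT($nullCount)
  $nullCount:=Records in selection([MyTable])-Size of array($atValues)
  If ($nullCount>1)
    // more than one record has no value in the unique field
  End if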

3. 

instead of Find in array,
you can sort the data set and use Find in sorted array.
it is much faster on a large array.
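a related trick: once the values are sorted, duplicates sit next to
each other, so a single pass collects them without a find per record
(again, the table and field names are placeholders):

  ARRAY TEXT($atCodes;0)
  ARRAY TEXT($atDups;0)
  C_LONGINT($i)
  ALL RECORDS([MyTable])
  SELECTION TO ARRAY([MyTable]Code;$atCodes)
  SORT ARRAY($atCodes;>)
  For ($i;2;Size of array($atCodes))
    If ($atCodes{$i}=$atCodes{$i-1})
      If (Find in array($atDups;$atCodes{$i})=-1)  // keep each duplicated value once
        APPEND TO ARRAY($atDups;$atCodes{$i})
      End if
    End if
  End for
  // $atDups now lists every value that occurs more than once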

4. 

prevention is better than treatment.
you can set appropriate field attributes, add an index,
and use Find in field before saving,
to avoid duplicates in the first place.
of course, you can still end up with duplicates after switching data files,
and sometimes you have to disable the field constraints in order to fix the problem.
for example, if a unique field has more than 3 duplicates,
you can't simply assign a new value to the first found duplicate,
because the save can still be rejected as a duplicate.
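as a sketch of the prevention side, something along these lines before
creating a record ($newCode is a hypothetical variable holding the
candidate value; Find in field is fast when the field is indexed):

  C_TEXT($newCode)  // hypothetical: the value you are about to assign
  C_LONGINT($recNum)
  $recNum:=Find in field([MyTable]Code;$newCode)
  If ($recNum=-1)  // no existing record holds this value
    CREATE RECORD([MyTable])
    [MyTable]Code:=$newCode
    SAVE RECORD([MyTable])
  Else
    // the value is already taken; reject it or pick another
  End if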