Do it and find out :)  I don't think the hashing in the collections
classes has any problem with such high object counts.  The only real
concern might be memory.
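
For a rough idea, here's a minimal sketch of the HashSet approach (the
String[] called "data" and the class name are just placeholders).
HashSet.add() returns false when the element is already present, so a
single pass answers the uniqueness question; for 2 million 20-character
strings the strings plus the set entries land in the ballpark of a few
hundred MB of heap.

import java.util.HashSet;
import java.util.Set;

public class UniquenessCheck {
    // Returns true if every string in data is distinct.
    public static boolean allUnique(String[] data) {
        // Pre-size the set so it never has to rehash 2 million entries.
        Set<String> seen = new HashSet<String>(data.length * 2);
        for (String s : data) {
            if (!seen.add(s)) {
                return false; // add() returns false on a duplicate
            }
        }
        return true;
    }
}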

On average, do you expect all 2 million strings to be unique?  How
often do you expect duplicates?

You could do the processing in smaller batches: all even hash codes
first, then all odd ones (rough sketch below).  But I'd avoid doing
anything like that if you can.
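
If memory does turn out to be tight, the even/odd idea could look
roughly like this (again just a sketch with a placeholder "data" array).
Equal strings have equal hash codes, so any duplicate pair always falls
into the same parity batch; each pass keeps only about half the strings
in the set, at the cost of reading the input twice.

import java.util.HashSet;
import java.util.Set;

public class BatchedUniquenessCheck {
    public static boolean allUnique(String[] data) {
        // Pass 0 handles even hash codes, pass 1 handles odd ones.
        for (int parity = 0; parity <= 1; parity++) {
            Set<String> seen = new HashSet<String>(data.length);
            for (String s : data) {
                // Low bit of the hash code picks the batch;
                // this works for negative hash codes too.
                if ((s.hashCode() & 1) == parity) {
                    if (!seen.add(s)) {
                        return false; // duplicate found within this batch
                    }
                }
            }
        }
        return true;
    }
}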

Your normal Arrays.sort() is pretty good, I believe.
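
The sort-based alternative is also straightforward: after Arrays.sort()
any duplicates sit next to each other, so one linear scan finds them.
A minimal sketch (placeholder names again, and it sorts the array in
place):

import java.util.Arrays;

public class SortedUniquenessCheck {
    public static boolean allUnique(String[] data) {
        Arrays.sort(data); // O(n log n) comparisons; modifies data in place
        for (int i = 1; i < data.length; i++) {
            if (data[i].equals(data[i - 1])) {
                return false; // equal neighbours mean a duplicate
            }
        }
        return true;
    }
}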

On Sep 4, 1:14 am, Barney <barney.h...@gmail.com> wrote:
> Is it realistic to use HashSet to determine whether a large amount of
> string data (2 000 000 strings of length 20) is composed of unique
> entries?
>
> If not, is it realistic, more generally, to quicksort this large
> amount of string data in memory (not using an external or file-based
> quicksort)?
>
> Thank you.