I agree it is faster, but you cannot use it with all inputs. I gave you a test case and you still haven't explained how to solve it. Your algorithm is limited to a bounded range of numbers. It will be very slow even for values around 2^31, which just fit in the size of an integer (and you can only run it if you have about 2 GB of virtual memory for the count array). Give it such inputs and it will perform much, much worse than an O(n log n) algorithm. One more thing: how are you going to handle negative numbers?

Tell me how fast it is with this set: { -2^30, -23, 64, 1, 4, 1, 2^31 } ... try to make it run faster than O(n log n) sorting.

-Dhyanesh

On 12/3/05, adak <[EMAIL PROTECTED]> wrote:

I don't see the contradiction. I'm unsure what the big-O figure would
be for this, but it is VERY much faster than quicksort, which I believe
is O(n log n). I've used this for things like "sorting" and finding
some unique distribution feature of a set for many years.

It can't be used for all sets because of the limits on the range of the
array, but on the ranges where it can be used, it's faster by far than
anything else. Logically, it MUST be, because it consists of just one
increment of the array's value per set member, plus iterating through
the set itself, which ANY solution must do.

You can talk about contradictions all you want, but that's just talk.
Try this with a random integer array of 10,000 elements, and you'll
KNOW exactly what I'm talking about. Compare it with Quicksort,
Heapsort, FlashSort, or any other algorithm you want that solves the
originally posted problem. Even if there is more than one duplicate
number in the set, the same algorithm clearly solves it.

Then you'll know that it's the answer, and your "contradictions" can be
put to rest.

adak

