Gene wrote:
I'm sorry, but the response below is wrong on several levels. See
notes.
C Erler wrote:
Because there are infinitely many integers, there is no need for a list of
4,300,000,000 of them to contain a duplicate. If there are only
4,299,999,999 possible values, there must be at least one duplicate.
The stack can be implemented as an array plus a variable that keeps track
of the current top of the stack. The coding is trivial once you
understand this pseudocode:
Array A;
int current = -1;    // index of the top element; -1 means empty

function pop()
    return A[current--];

function push(object o)
    A[++current] = o;
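A minimal runnable version of the pseudocode above, sketched in Python; the fixed capacity and the class name are my own choices for illustration:

```python
class ArrayStack:
    """Stack backed by a fixed-size array and a top-of-stack index."""

    def __init__(self, capacity=16):
        self.a = [None] * capacity   # the array A
        self.current = -1            # index of the top element; -1 means empty

    def push(self, o):
        self.current += 1            # A[++current] = o
        self.a[self.current] = o

    def pop(self):
        o = self.a[self.current]     # return A[current--]
        self.current -= 1
        return o
```

e.g. after push(1) then push(2), the first pop() returns 2 and the second returns 1.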
Hi,
This can be mathematically proved if the heuristic you use is less than
or equal to the actual cost (the straight-line-distance heuristic is a
natural example).
The proof is as follows: you maintain a list of nodes with their
costs, and you goal-test the least-cost node. If it is not a
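The quoted proof is cut off; for reference, a sketch of the standard optimality argument under an admissible heuristic (the notation h* for the true remaining cost is my assumption, not from the message):

```latex
% Admissibility: h(n) \le h^*(n) for every node n.
% With g(n) the cost already paid to reach n:
f(n) = g(n) + h(n) \le g(n) + h^*(n)
% i.e. f(n) never overestimates the cost of the best solution through n.
% When a goal node G becomes the least-cost node on the list, h(G) = 0,
% so f(G) = g(G) \le f(n) for every other frontier node n; no path
% through any of them can beat g(G), so the first solution that passes
% the goal test is optimal.
```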
Terry wrote:
This problem is from the book Programming Pearls. Can anyone explain
the solution to me? How does he do it?
Given a file containing 4,300,000,000 integers, how
can you find one that appears at least twice? (Explain, first,
why this must happen.)
Binary search finds an element
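The reply above is cut off; the standard approach from Programming Pearls is a binary search on the range of values rather than on positions in the file. A minimal sketch in Python (a small in-memory list here for illustration; the book does repeated passes over the file with the same counting idea):

```python
def find_duplicate(nums, lo, hi):
    # nums holds more than (hi - lo + 1) values drawn from [lo, hi], so by
    # the pigeonhole principle some value repeats.  Each pass counts how
    # many values fall in the lower half of the current range; whichever
    # half holds more values than it has slots must contain a duplicate.
    # O(n log(range)) time, O(1) extra space.
    while lo < hi:
        mid = (lo + hi) // 2
        count = sum(1 for x in nums if lo <= x <= mid)
        if count > (mid - lo + 1):
            hi = mid          # lower half is over-full
        else:
            lo = mid + 1      # so the upper half must be over-full
    return lo

# e.g. the eight distinct values 0..7 plus an extra 5, range [0, 7]:
# find_duplicate(list(range(8)) + [5], 0, 7) returns 5
```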
Terry wrote:
Can you use some sort of hash table to store all elements of the first
list, then check whether it contains each element of the second ? This
might be O(n) in most cases.
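A minimal sketch of that suggestion in Python, using the built-in set as the hash table (the function name is mine):

```python
def common_elements(first, second):
    seen = set(first)                        # build the hash table: O(n) expected
    return [x for x in second if x in seen]  # probe with each element: O(m) expected
```

e.g. common_elements([3, 1, 4, 1, 5], [5, 9, 2, 6, 5, 3]) returns [5, 5, 3].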
You received this message because you are subscribed to the Google Groups
Algorithm Geeks group.
Thanks for your reply. These two lists change on every
iteration (each iteration produces a new sorted list), so I have to find the
common elements of two fresh lists every time. If I create a
hash table for one of the lists, then the hash table itself would have
to be recreated.
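Since both lists arrive sorted on every iteration, one alternative worth noting (not from this thread, just a common technique) is a merge-style scan with two indices, which needs no hash table and nothing to rebuild:

```python
def sorted_common(a, b):
    # Walk two sorted lists in lockstep: O(len(a) + len(b)) time.
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        elif a[i] > b[j]:
            j += 1
        else:
            out.append(a[i])   # element present in both lists
            i += 1
            j += 1
    return out
```

e.g. sorted_common([1, 2, 4, 7], [2, 3, 4, 8]) returns [2, 4].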
Amith,
I did think of the summation that you mentioned; however, it would take
O(n) time to sum up the elements and a further O(n) to do the
checking (the summed elements are not sorted), so it would take
about O(n^2) in total, which is not desirable.
Your replies indicate a misunderstanding of asymptotic notation. If
you do several things in a row, the costs add, and the total is dominated
by the slowest step: O(n) followed by O(n) is still O(n), not O(n^2).
For instance, with the hash, you do several things in a row:
Create the hash: O(n)
Fill the hash: O(n)
Probe it with each element of the second list: O(n)
However, I have another problem: there may be 1 to 5 such
coordinates in each list. Will the hash table become unwieldy in such a
case?