Given two arrays, one a 1D array and the other a 2D array, you need to find out whether the 2D array contains the 1D array as a subset. The elements of the 1D array are not necessarily present in the same row; the match can move either up or down, but it must be continuous in the 2D array.
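Since the movement rule is a bit ambiguous, here is a minimal DFS sketch, assuming "continuous" means a path of vertically or horizontally adjacent cells (the word-search interpretation); the function name `contains_sequence` and the 4-neighbour rule are my assumptions, not part of the original question:

```python
def contains_sequence(grid, seq):
    # Try to match seq as a path of adjacent cells (up/down/left/right),
    # starting the search from every cell of the grid.
    rows, cols = len(grid), len(grid[0])

    def dfs(r, c, i, used):
        if grid[r][c] != seq[i]:
            return False
        if i == len(seq) - 1:
            return True  # matched the whole 1D array
        used.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in used:
                if dfs(nr, nc, i + 1, used):
                    return True
        used.discard((r, c))  # backtrack
        return False

    return any(dfs(r, c, 0, set()) for r in range(rows) for c in range(cols))
```

For example, in [[1, 2, 3], [4, 5, 6]] the sequence [1, 2, 5] is found (right, then down), while [1, 5] is not, because 1 and 5 are not adjacent.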
From a high level, and with very few collisions, it's O(1).
There isn't much to prove, because this is based on common sense. For example, think of the world as a hashtable and all countries as its keys: boxes (storage) are marked by different country names (keys), and you know where to locate them.
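To make the analogy concrete, a tiny sketch with Python's built-in dict (the country and box names are made up for illustration):

```python
# The key ("country") hashes directly to a bucket, so a lookup does not
# scan all entries: average-case O(1).
storage = {
    "India": ["box-101", "box-102"],
    "Japan": ["box-201"],
    "Brazil": ["box-301"],
}

print(storage["Japan"])     # direct lookup by key
print("France" in storage)  # absence is also an O(1) check
```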
Are we talking about only positive integers here?
On Wed, Oct 26, 2011 at 11:33 PM, Vaibhav Mittal vaibhavmitta...@gmail.com wrote:
+1 Prem
@ligerdave: I knew about the recursion method, but can you throw some light on the pointer-based method (with a small example, maybe)?
Specifically, my logic:
N = 3, k = 5 (position).
Numbers with 0 set bits:
000 (k = 5)
Numbers with 1 set bit (on every loop, get the next number with the same number of set bits and decrement k by 1):
001 (k = 4)
010 (k = 3)
100 (k = 2)
Numbers with 2 set bits:
011 (k = 1)
101
110
Therefore the answer is 011.
Complexity: O(n).
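The ordering in the walkthrough can be checked with a brute-force sketch (this only illustrates the order, by popcount then by value with k 1-indexed, which is my reading of the walkthrough; it is not the O(n) method itself):

```python
def kth_by_popcount(n_bits, k):
    # Enumerate all n_bits-bit values ordered first by number of set
    # bits, then numerically, and return the k-th one (1-indexed).
    for bits in range(n_bits + 1):
        for v in range(2 ** n_bits):
            if bin(v).count("1") == bits:
                k -= 1
                if k == 0:
                    return v
    return None  # k larger than 2**n_bits
```

With N = 3 and k = 5 this returns 0b011 = 3, matching the walkthrough.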
If N elements are already hashed and you insert the next element, will hashing take O(log N) time to find the next available position? Actually, the next insertion typically has no correlation to the pre-existing table content (unless your hash function is badly designed to always hash on
Suppose the problem is:
Problem 4: Find all unique pairs of elements in an array that sum to S. For example, if array = {2, 4, 6, 4, 6} and S = 8, then the answer is {(2,6), (4,4)}.
Suppose we use the C++ STL hash class to solve the problem. In the first pass we hash all the elements; in the second pass we take each element
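The truncated two-pass idea can be completed as a sketch, using a Python dict/Counter in place of the C++ STL hash (an assumption on my part):

```python
from collections import Counter

def unique_pairs_with_sum(arr, s):
    # Pass 1: hash every element with its frequency.
    counts = Counter(arr)
    pairs = []
    # Pass 2: for each distinct element x, look up its complement s - x.
    for x in sorted(counts):
        y = s - x
        if x < y and y in counts:
            pairs.append((x, y))
        elif x == y and counts[x] >= 2:  # x pairs with another copy of itself
            pairs.append((x, x))
    return pairs
```

For array {2, 4, 6, 4, 6} and S = 8 this yields [(2, 6), (4, 4)], matching the example.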
Forgot to mention: this was asked in an Amazon interview.
--
You received this message because you are subscribed to the Google Groups
Algorithm Geeks group.
To post to this group, send email to algogeeks@googlegroups.com.
To unsubscribe from this group, send email to
I am a bit confused. Are you talking about the runtime of the hash function itself, or of finding the position?
Agreed on efficient hash function design. However, no single function can be designed to fit all cases. A key to making a particular hash function efficient is knowing the potential data size.
Well, if we talk about space complexity in a hash, I'm sorry, we require O(n) for the hash data structure, as each bucket can contain at most one data value of a key-value pair; the key is what hashes your value. However, when we talk about time complexity, it always depends on the hash function.
@Prem: So is the time complexity O(1), O(log n), or O(n)?
On 27 October 2011 07:28, Prem Krishna Chettri hprem...@gmail.com wrote:
I am asking about the question above.
On 27 October 2011 07:38, kumar raja rajkumar.cs...@gmail.com wrote:
@Kumar: It's O(1) for searching an element in a hash table as long as there is no collision.
-Vicki
On Thu, Oct 27, 2011 at 10:38 AM, kumar raja rajkumar.cs...@gmail.com wrote:
Given a file containing roughly 300 million social security numbers (9-digit numbers), find a 9-digit number that is not in the file. You have unlimited drive space but only 2 megabytes of RAM at your disposal.
Texas is coming for computer science internships, so if anyone knows the pattern of the exam or interview, please tell.
I think in the worst case this hashing technique will take O(n^(k-1)) time, so I would stick with the sorting idea, with a confirmed O(m log m) + O(m) time complexity, where m is the average length of the arrays.
On Tue, Oct 25, 2011 at 6:02 PM, kumar raja rajkumar.cs...@gmail.com wrote:
Dheeraj, can you please tell me
The one-sentence answer to your query is that every hash data structure is supposed to provide constant time complexity, i.e. O(1). However, as the hash table fills up, data-value collisions occur because a common key gets generated by the hash function. That's where the hash function's dependability comes into the picture.
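The collision point can be illustrated with a minimal separate-chaining table (the class and bucket count here are made up for illustration):

```python
class ChainedHashTable:
    # Each bucket is a list; keys that hash to the same bucket collide,
    # and lookups in that bucket degrade from O(1) toward O(n).
    def __init__(self, buckets=8):
        self.buckets = [[] for _ in range(buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing key
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)
```

With a single bucket every key collides, yet lookups still succeed; they just turn into linear scans.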
Well
Clearly this is an external sorting question. Merge sort can be used.
On 10/27/11, shiva@Algo shiv.jays...@gmail.com wrote:
Given a file containing roughly 300 million social security
numbers(9-digit numbers), find a 9-digit number that is not in the file.
You have unlimited drive space
I've been working on similar things lately.
First, if the file is unsorted, you'll have to just go through it; it'll be O(n) worst case, which is preferable to sorting it.
Second, if it is sorted, read and store the maximum chunk of the file that you can; a program is allotted 4 GB of address
I guess this is just like finding a word in a matrix.
On Thu, Oct 27, 2011 at 7:32 PM, SAMMM somnath.nit...@gmail.com wrote:
Forgot to mention this was asked in Amazon Interview ..
Small typo: in part 2) it should say 13-(1+4+6)=2. Basically, when the position becomes smaller than the element, you stop subtracting.
Hashing all of the K arrays seems like a bit much. How about this?
You have K separate arrays to start with, each array having N elements (is that correct?).
1) Sort the first array.
2) Step through the 2nd array, one element at a time, say Array(2).element(i).
Check to see if the
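A sketch of steps 1) and 2), assuming the goal is to report which elements of the second array also occur in the first (the function name and the membership goal are my assumptions, since the message is cut off):

```python
import bisect

def common_with_first(first, second):
    # 1) Sort the first array once: O(N log N).
    first_sorted = sorted(first)
    found = []
    # 2) Binary-search each element of the second array: O(log N) each.
    for x in second:
        i = bisect.bisect_left(first_sorted, x)
        if i < len(first_sorted) and first_sorted[i] == x:
            found.append(x)
    return found
```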
Read the question, guys: only 2 MB of memory...
I would perhaps try to find the range of the numbers; say the range is 300 million here.
So I would maintain a count array: count[i] += 1 if, on traversing the large file, I find a number from chunk i. Here the chunk size can be decided suitably, for example
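The chunk-count idea can be completed as a two-pass sketch; a chunk size of 10^6 is my choice, picked so that both the 1000 counters and the 125 KB bitmap fit well inside 2 MB:

```python
def find_missing_two_pass(numbers):
    # `numbers` stands in for streaming the file; the domain is [0, 10^9).
    CHUNK = 10 ** 6
    counts = [0] * 1000

    # Pass 1: count how many numbers fall into each chunk of size 10^6.
    for n in numbers:
        counts[n // CHUNK] += 1

    # Since 300 million < 10^9, at least one chunk is incomplete.
    c = min(range(1000), key=counts.__getitem__)

    # Pass 2: bitmap only that chunk (10^6 bits = 125 KB).
    seen = bytearray(CHUNK // 8)
    for n in numbers:
        if n // CHUNK == c:
            off = n % CHUNK
            seen[off // 8] |= 1 << (off % 8)

    # Any clear bit in the incomplete chunk is a missing number.
    for off in range(CHUNK):
        if not seen[off // 8] & (1 << (off % 8)):
            return c * CHUNK + off
```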
@Shiva: Using an integer array a[10], initialized to 0, read through the file and for each number n, increment a[n%10]. At the end of the file, find any k such that a[k] != 1. Then read through the file again. For any number n such that n%10 == k, set a[n/10] = -1. At the end
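A runnable sketch in the same multi-pass spirit, but narrowing one decimal digit per pass from the most significant side (this ordering, and the function name, are my variation on the description above; each inner loop stands for a fresh scan of the file):

```python
def find_missing_digitwise(numbers):
    # Nine passes, ten counters each: constant memory regardless of file size.
    prefix = 0
    for pos in range(9):
        power = 10 ** (8 - pos)
        counts = [0] * 10
        for n in numbers:  # in practice: re-read the file
            if n // (power * 10) == prefix:  # keep only numbers matching prefix
                counts[(n // power) % 10] += 1
        # By pigeonhole the least-filled digit bucket is incomplete, since
        # the file holds fewer than the 10^9 possible numbers.
        prefix = prefix * 10 + min(range(10), key=counts.__getitem__)
    return prefix
```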
Don,
As you said, the intersection set won't really be in sorted order, as it depends on the elements of the second array, which are unsorted. Still, sorting them wouldn't be much different, as it'd be worst case O(n log n) [Array 2 == Array 1].
But sorting the first array has already cost O(n