@all: how is the problem solved using a heap? Can someone explain? I did
not understand what I found on the net...
On Thu, Feb 3, 2011 at 2:23 AM, Avik Mitra wrote:
I am proposing a solution for problem 2..
>2.
> Given a text file, implement a solution to find out if a pattern
> similar to wild cards can be detected.
> fort example find if a*b*cd*, or *win or *def* exists in the text.
Whatever the pattern is, it must be a sort of regular expression. So in
principle, in a hash table we can implement a binary search to find the
correct position of the word to be inserted, or whether it exists at all.
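For the wildcard patterns in the problem statement, here is a minimal sketch (assuming `*` matches any, possibly empty, sequence of characters, as in the examples a*b*cd*, *win, *def*; the name `wildcardMatch` is just illustrative):

```cpp
#include <cassert>
#include <string>

// Greedy '*' wildcard match: '*' matches any (possibly empty) run of
// characters; all other characters must match literally.
bool wildcardMatch(const std::string& text, const std::string& pat)
{
    std::size_t t = 0, p = 0;
    std::size_t star = std::string::npos;  // position of last '*' seen in pat
    std::size_t mark = 0;                  // text position to retry from
    while (t < text.size()) {
        if (p < pat.size() && pat[p] == text[t]) { ++t; ++p; }
        else if (p < pat.size() && pat[p] == '*') { star = p++; mark = t; }
        else if (star != std::string::npos) { p = star + 1; t = ++mark; }
        else return false;
    }
    while (p < pat.size() && pat[p] == '*') ++p;  // trailing stars match ""
    return p == pat.size();
}
```

To test whether a pattern such as a*b*cd* occurs anywhere in the text, match the pattern wrapped in stars (*a*b*cd*) against the whole text.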
On Wed, Feb 2, 2011 at 11:22 PM, Wei.QI wrote:
This is a standard map-reduce problem.
1. Distribute the file by word: send each word to a machine based on its hash value.
2. Count the words and return the top 10 words from each machine.
3. Aggregate the results together to get the overall top 10.
-weiq
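The three steps above can be simulated on a single machine; a sketch (the bucket count stands in for the machines, and `topWords`/`topK` are illustrative names, not framework calls). Because every copy of a word hashes to the same bucket, each bucket's counts are exact:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

using Count = std::pair<std::string, long>;

// Top-k most frequent words from one "machine's" local counts.
static std::vector<Count> topK(const std::unordered_map<std::string, long>& counts,
                               std::size_t k)
{
    std::vector<Count> v(counts.begin(), counts.end());
    std::sort(v.begin(), v.end(),
              [](const Count& a, const Count& b) { return a.second > b.second; });
    if (v.size() > k) v.resize(k);
    return v;
}

std::vector<Count> topWords(const std::vector<std::string>& words,
                            std::size_t machines, std::size_t k)
{
    // 1. Distribute words across "machines" by hash value.
    std::vector<std::unordered_map<std::string, long>> bucket(machines);
    for (const std::string& w : words)
        ++bucket[std::hash<std::string>{}(w) % machines][w];

    // 2. Each machine returns its local top k.
    // 3. Aggregate the candidates and take the global top k.
    std::unordered_map<std::string, long> merged;
    for (const auto& b : bucket)
        for (const Count& c : topK(b, k))
            merged[c.first] = c.second;
    return topK(merged, k);
}
```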
On Wed, Feb 2, 2011 at 8:46 AM, bittu wrote:
@sankalp ...You seem to be right, and you also said you like tries, so
could you elaborate on the approach you gave, with some example,
algo, or code?
Although a hashtable/hashmap is also an alternative for such a problem,
I want to see your explanation of this question, because a problem
can be solved in more than one way.
@above
I said with some augmentation, that's why I said it (and also I like
tries :D).
If some non-determinism is condoned, maybe you can use the Rabin-Karp
method to improve on storage.
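For reference, a sketch of Rabin-Karp substring search with a rolling hash (the base B and modulus M are arbitrary choices; each hash hit is verified by a real comparison, so this version is deterministic):

```cpp
#include <cassert>
#include <string>

// Rabin-Karp: index of the first occurrence of pat in text, or -1.
long rabinKarp(const std::string& text, const std::string& pat)
{
    const unsigned long long B = 256, M = 1000000007ULL;
    std::size_t n = text.size(), m = pat.size();
    if (m == 0) return 0;
    if (m > n) return -1;

    unsigned long long hp = 0, ht = 0, pow = 1;
    for (std::size_t i = 0; i + 1 < m; ++i) pow = pow * B % M;  // B^(m-1) mod M
    for (std::size_t i = 0; i < m; ++i) {
        hp = (hp * B + (unsigned char)pat[i]) % M;   // hash of the pattern
        ht = (ht * B + (unsigned char)text[i]) % M;  // hash of first window
    }
    for (std::size_t i = 0; ; ++i) {
        // Verify every hash hit, so collisions cannot cause false matches.
        if (ht == hp && text.compare(i, m, pat) == 0) return (long)i;
        if (i + m == n) return -1;
        // Roll the window: drop text[i], append text[i + m].
        ht = ((ht + M - (unsigned char)text[i] * pow % M) % M * B
              + (unsigned char)text[i + m]) % M;
    }
}
```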
On Feb 2, 1:28 pm, snehal jain wrote:
@Indore
Create a hash table of words, and get the top n counters from the hash counts.
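On the heap question asked at the top of the thread: after building the hash counts, a min-heap of size n gives the top n in O(m log n) over m distinct words, since the smallest of the current best n is always on top and gets evicted. A sketch (`topN` is an illustrative name):

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <queue>
#include <string>
#include <unordered_map>
#include <vector>

// Top-n words by count, using a min-heap of size n over (count, word).
std::vector<std::string> topN(const std::unordered_map<std::string, long>& counts,
                              std::size_t n)
{
    using Entry = std::pair<long, std::string>;
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap;
    for (const auto& kv : counts) {
        heap.push({kv.second, kv.first});
        if (heap.size() > n) heap.pop();  // evict the smallest count
    }
    std::vector<std::string> result;
    while (!heap.empty()) { result.push_back(heap.top().second); heap.pop(); }
    std::reverse(result.begin(), result.end());  // most frequent first
    return result;
}
```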
On Wed, Feb 2, 2011 at 1:58 PM, snehal jain wrote:
@ above
Your trie approach needs a lot of optimization.. it will take up a lot of
space. A trie is suitable when we want to reduce search complexity,
but its space complexity is very bad, so hashing should be better here
compared to a trie.
I think shashank's solution is better...
On Tue,
I think, as juver++ said, you should also try reading on the
internet about these kinds of problems. This can be solved with an
augmentation of a trie (keeping a count variable at the leaf,
maintaining a counter for all the word frequencies
accordingly). Just print the top ten results in the
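That augmentation might look like this (a sketch; the count field stores how many times the word ending at that node was inserted):

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Trie node augmented with a count at each word end.
struct TrieNode {
    std::map<char, std::unique_ptr<TrieNode>> child;
    long count = 0;  // how many times the word ending here was inserted
};

struct Trie {
    TrieNode root;

    void insert(const std::string& word) {
        TrieNode* n = &root;
        for (char c : word) {
            auto& next = n->child[c];
            if (!next) next = std::make_unique<TrieNode>();
            n = next.get();
        }
        ++n->count;  // word frequency lives at the final node
    }

    long count(const std::string& word) const {
        const TrieNode* n = &root;
        for (char c : word) {
            auto it = n->child.find(c);
            if (it == n->child.end()) return 0;  // word never inserted
            n = it->second.get();
        }
        return n->count;
    }
};
```

A traversal collecting (word, count) pairs at the nodes with nonzero counts would then feed the top-ten selection.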
Well, it's a good question. Instead of Googling,
I would like to give a naive approach for this, which pays in
time & space.
First, count the number of words in the single large file.
For this we can process it like:
while (in.get(ch))                    // read character by character from the file
{
    if (isspace(ch)) inWord = false;                   // whitespace ends the current word
    else if (!inWord) { inWord = true; ++wordCount; }  // a new word begins
}
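A self-contained version of that loop, assuming words are separated by whitespace (the names `countWords`, `inWord`, and `wordCount` are just illustrative):

```cpp
#include <cassert>
#include <cctype>
#include <istream>
#include <sstream>

// Count words in a stream by reading character by character:
// a word starts when a non-space character follows whitespace.
long countWords(std::istream& in)
{
    long wordCount = 0;
    bool inWord = false;
    char ch;
    while (in.get(ch)) {
        if (std::isspace(static_cast<unsigned char>(ch)))
            inWord = false;   // whitespace ends the current word
        else if (!inWord) {
            inWord = true;    // a new word begins
            ++wordCount;
        }
    }
    return wordCount;
}
```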
Use google.
--
You received this message because you are subscribed to the Google Groups
"Algorithm Geeks" group.
To post to this group, send email to algogeeks@googlegroups.com.
To unsubscribe from this group, send email to
algogeeks+unsubscr...@googlegroups.com.
For more options, visit this g