[algogeeks] Re: finding duplicate set of parentheses

2011-12-11 Thread Gene
We talked about the problem of removing as many parentheses as possible: http://groups.google.com/group/algogeeks/browse_thread/thread/26ce36ad34dce9dc/e8fb40ab2ba0ecc0 You didn't define "duplicate." For (a*b)+c, the parens don't add any information. Should they be removed? The algorithm given

[algogeeks] Re: finding duplicate set of parentheses

2011-12-10 Thread Lucifer
@rahul An algorithm for identifying the exact set of parentheses to be removed: take a stack named 'IndStk' to hold the indices of opening parentheses. When a '(' is encountered, push its index onto the stack; when a ')' is encountered, pop from the top of the stack ( i.e th
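
Since Lucifer's pseudocode is cut off above, here is a minimal C sketch of the same stack-of-indices idea (the array names, the nesting limit, and the sample expression are assumptions of mine, not from the post): a pair is reported as duplicate when no operator appears between a '(' and its matching ')'.

```c
#include <stdio.h>
#include <string.h>

#define MAX_DEPTH 128                   /* assumed nesting limit */

int main(void)
{
    const char *expr = "((a+b))*(c)";   /* sample input, not from the post */
    int open_idx[MAX_DEPTH];            /* indices of unmatched '(' */
    int has_op[MAX_DEPTH];              /* operator seen since that '('? */
    int top = -1;

    for (int i = 0; expr[i] != '\0'; ++i) {
        char c = expr[i];
        if (c == '(') {
            ++top;                      /* remember where this pair opened */
            open_idx[top] = i;
            has_op[top] = 0;
        } else if (c == ')') {
            if (top < 0)
                continue;               /* unbalanced ')': ignored in this sketch */
            if (!has_op[top])           /* nothing but an operand inside */
                printf("duplicate pair at indices %d and %d\n", open_idx[top], i);
            --top;
        } else if (strchr("+-*/", c) != NULL && top >= 0) {
            has_op[top] = 1;            /* innermost open pair encloses an operator */
        }
    }
    return 0;
}
```

Under this convention a pair wrapping a single operand, such as (c), is also reported; whether that matches Lucifer's exact definition depends on the part of the post that was truncated.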

[algogeeks] Re: finding duplicate set of parentheses

2011-12-10 Thread Lucifer
@rahul.. there is one exception to the above rule, and that is the case of the unary minus operator. For example, a + (-c) + d will need one pair of parentheses, but a + (-c + d) will only need 1 instead of 2. But for the sake of the unary minus you can assume that it will be enclosed within a parenthesis. On D

[algogeeks] Re: finding duplicate set of parentheses

2011-12-10 Thread Lucifer
@rahul... If you just plan to find out the number of duplicate parentheses, then that would be (No. of open parentheses - No. of operators + 1). On Dec 10, 10:22 pm, rahul venkat wrote: > hi everyone, > > can you suggest an algorithm for finding the duplicate parentheses in a given > expression ?

[algogeeks] Re: Finding duplicate

2006-01-05 Thread varadha rajan
Hey pramod, I am sorry, it is not working for certain test cases. On 1/2/06, pramod <[EMAIL PROTECTED]> wrote: Hi Varadha, could you present your logic please?

[algogeeks] Re: Finding duplicate

2006-01-02 Thread pramod
Hi Varadha, Could you present your logic please?

[algogeeks] Re: Finding duplicate

2005-12-22 Thread varadha rajan
Hi guys, I am new to this group. I have one solution to find the duplicate number: lp = rp = (n/2)+1 // lp = left ptr and rp = right ptr; lh = rh = result = 0 // lh = left half and rh = right half; result ^= num(lp) // ^ is xor; for(;(lp>-1||rp { if(lp>-1){ if(num(lp)^result==lh || num(lp)^result=

[algogeeks] Re: Finding duplicate

2005-12-20 Thread Vijay Venkat Raghavan N
Hi, first of all to adak: this problem cannot use your logic, as even if the range is from 1 to 65000, as you have given for example, I can easily pick out 4 numbers like 1, 10, 456, 65000. The complexity of your algo is 65000; mine would be 4 lg 4 = 8. Pramod: great point man, even I thought it had to b

[algogeeks] Re: Finding duplicate

2005-12-08 Thread Dhyanesh
Hashing has collisions. What if all the numbers hash to the same position (in the worst case)? Then you will be back to square one. Hashing is just a hack for any problem ... not a proper solution. -Dhyanesh On 12/8/05, mathmoi <[EMAIL PROTECTED]> wrote: Dhyanesh wrote: > I agree it is faster but y
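
For reference, the hashing approach being argued about here (mathmoi's actual code does not appear in this digest) might look like the open-addressing sketch below; the table size, the sample values, and the restriction to non-negative inputs are my assumptions. It runs in expected O(n) time, but, as Dhyanesh points out, it degrades when many values collide.

```c
#include <stdio.h>

#define TABLE_SIZE 2048               /* assumed to be well above 2*n */

static long table[TABLE_SIZE];
static int  used[TABLE_SIZE];

/* Returns 1 if x has been inserted before, 0 otherwise (x assumed non-negative). */
static int seen_before(long x)
{
    size_t h = (size_t)x % TABLE_SIZE;
    while (used[h]) {
        if (table[h] == x)
            return 1;                 /* duplicate */
        h = (h + 1) % TABLE_SIZE;     /* linear probing on collision */
    }
    used[h] = 1;
    table[h] = x;
    return 0;
}

int main(void)
{
    long a[] = {1, 4, 5, 45, 2147483647, 5};   /* sample input, assumed */
    size_t n = sizeof a / sizeof a[0];

    for (size_t i = 0; i < n; ++i) {
        if (seen_before(a[i])) {
            printf("duplicate: %ld\n", a[i]);
            return 0;
        }
    }
    printf("no duplicate found\n");
    return 0;
}
```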

[algogeeks] Re: Finding duplicate

2005-12-08 Thread mathmoi
Dhyanesh wrote: > I agree it is faster but you cannot use it with all inputs. I gave you a > test case and you still haven't explained how to solve it. Your algorithm > is limited to a range of numbers only. It will be very slow even for numbers > like 2^31 or so which fit into the size of an

[algogeeks] Re: Finding duplicate

2005-12-04 Thread Dhyanesh
I think you are right on this point ... my reduction to the element uniqueness problem is not perfect. Maybe there could exist a better algorithm, or maybe there is another way to reduce it to element uniqueness. Maybe someone can come up with a faster way than O(n lg n). -Dhyanesh On 12/4/05, p

[algogeeks] Re: Finding duplicate

2005-12-04 Thread pramod
But Dhyanesh, don't forget the condition that we are given that exactly one number is repeated twice. What you said is true for the general case. How do you prove that for this special case, there's no algorithm that takes less than O(n log n)? If such an algorithm exists and we give an input consisting

[algogeeks] Re: Finding duplicate

2005-12-04 Thread adak
I mentioned the range constraint of the algorithm previously (which I guess you ignored). My assertions are that it can be used with many sets of integers in the real world, and that whenever it can be used, it is very fast indeed. In academia, huge and negative numbers really have their place. In t

[algogeeks] Re: Finding duplicate

2005-12-03 Thread Dhyanesh
I agree it is faster, but you cannot use it with all inputs. I gave you a test case and you still haven't explained how to solve it. Your algorithm is limited to a range of numbers only. It will be very slow even for numbers like 2^31 or so which fit into the size of an integer (and you can run i

[algogeeks] Re: Finding duplicate

2005-12-03 Thread adak
I don't see the contradiction. I'm unsure what the big O figure would be for this, but it is VERY much faster than quicksort, which I believe is O(n log(n)). I've used this for things like "sorting" and finding some unique distribution feature from a set, for many years. It can't be used for all

[algogeeks] Re: Finding duplicate

2005-12-03 Thread Dhyanesh
Hi Pramod, suppose you can solve this problem better than O(n log(n)). This means that given an array you would be able to find the duplicate number quicker than O(n log(n)). Using the same algo you would be able to determine that there are no duplicate numbers, since a search for the duplicate numbe

[algogeeks] Re: Finding duplicate

2005-12-03 Thread adak
What is the range of random numbers here, Pramod? Are we talking about ints or longs or doubles or what data type, exactly? When you said "numbers" in the original post, I was thinking "integers", but integers can be covered by the distribution counting algorithm on modern compilers. So wha

[algogeeks] Re: Finding duplicate

2005-12-03 Thread adak
Certainly I can! Oh no! My secret Cray in the basement is out!! Of course, this is a constraint of the algorithm. Fortunately, many times the random numbers are within a certain range which CAN be covered by an array's subscripts. And if it can be used, its run time is faster than any

[algogeeks] Re: Finding duplicate

2005-12-02 Thread pramod
I agree with Dhyanesh. This "counting" sort routine prescribed by Adak fails when the numbers can be random. Dhyanesh, I searched for "element uniqueness" and that problem is more generic. It says to find if all the numbers are unique. But in my problem it is known that exactly one number is dupli

[algogeeks] Re: Finding duplicate

2005-12-02 Thread Dhyanesh
Btw, how would your algo work if the numbers were {1, 4, 5, 10^100, 45, 5}? I don't believe you can create an array of size 10^100. This problem cannot be solved better than O(n lg n). -Dhyanesh On 12/2/05, adak <[EMAIL PROTECTED]> wrote: Hi pramod, I'm not sure what the "right" name for th

[algogeeks] Re: Finding duplicate

2005-12-02 Thread adak
Hi pramod, I'm not sure what the "right" name for this is; if anybody knows, I hope they jump in with that info for us. In the original problem, the set of numbers contained just ONE duplicate value. So let's say the set is {3, 9, 6, 4, 4}. The array that is built up by the loop would then be ha

[algogeeks] Re: Finding duplicate

2005-12-02 Thread Dhyanesh
Hi, if the numbers are arbitrary then you cannot do better than O(n log(n)) ... this is a proven lower bound. Google for "Element Uniqueness" and you will find links to these proofs. -Dhyanesh On 12/2/05, pramod <[EMAIL PROTECTED]> wrote: I am sorry I did not get your solution. So if the giv
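
For completeness, the O(n log(n)) route that this lower-bound discussion takes as the baseline is sort-then-scan; below is a minimal sketch, with a sample array of my own in which exactly one value repeats.

```c
#include <stdio.h>
#include <stdlib.h>

/* Comparator for qsort over long values. */
static int cmp_long(const void *a, const void *b)
{
    long x = *(const long *)a, y = *(const long *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    long a[] = {5, 1, 8, 100, 8};             /* sample input, assumed */
    size_t n = sizeof a / sizeof a[0];

    qsort(a, n, sizeof a[0], cmp_long);       /* O(n log n) sort */
    for (size_t i = 0; i + 1 < n; ++i) {      /* O(n) scan of adjacent pairs */
        if (a[i] == a[i + 1]) {
            printf("duplicate: %ld\n", a[i]);
            break;
        }
    }
    return 0;
}
```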

[algogeeks] Re: Finding duplicate

2005-12-02 Thread pramod
I am sorry, I did not get your solution. So if the given array is, say, {5, 1, 8, 100}, where all are > 1, how will your algo work? Or did you have counting sort in mind? For my problem, counting sort is ruled out.

[algogeeks] Re: Finding duplicate

2005-12-02 Thread adak
If it's possible for the array to encompass, by subscript, the entire range of the numbers, then this is VERY fast: for each n { array[n] += 1; if (array[n] > 1) /* you have found the duplicate n */ break; } I call this "doing a distribution count", but I'm sure there's a better term
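
A runnable version of adak's distribution-count loop might look like the following; the 65000 bound is borrowed from the range example quoted elsewhere in the thread, the sample set {3, 9, 6, 4, 4} is adak's own, and the assumption of small non-negative integers is exactly the constraint the rest of the discussion argues over.

```c
#include <stdio.h>

#define MAX_VALUE 65000               /* assumed range bound, as in adak's example */

static int count[MAX_VALUE + 1];      /* static storage starts zeroed */

int main(void)
{
    int a[] = {3, 9, 6, 4, 4};        /* adak's sample set from the thread */
    size_t n = sizeof a / sizeof a[0];

    for (size_t i = 0; i < n; ++i) {
        count[a[i]] += 1;
        if (count[a[i]] > 1) {        /* first value seen twice */
            printf("duplicate: %d\n", a[i]);
            break;
        }
    }
    return 0;
}
```

One pass over the input, O(n) time, and O(range) extra memory, which is the trade-off debated in the rest of the thread.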