We talked about the problem of removing as many parentheses as
possible:
http://groups.google.com/group/algogeeks/browse_thread/thread/26ce36ad34dce9dc/e8fb40ab2ba0ecc0
You didn't define "duplicate." For (a*b)+c, the parens don't add any
information. Should they be removed? The algorithm given
@rahul
An algo for identifying the exact set of parentheses to be removed would
be:
Take a stack named 'IndStk' to hold the indices of opening
parentheses. When a '(' is encountered, push its index onto the
stack, and if a ')' is encountered, pop from the top of
the stack ( i.e th
@rahul..
there is one exception to the above rule, and that is the case of the
unary minus operator..
for example: a + (-c) + d will need one pair of parentheses,
but a + ( -c + d ) will need only 1 pair instead of 2..
But for the sake of the unary minus you can assume that it will be enclosed
within parentheses..
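A minimal sketch of the stack-based check described above, in Python for brevity (the function name is my own, and treating a popped unary minus as an operator is my reading of the convention suggested in this thread; it assumes a balanced expression):

```python
OPERATORS = set("+-*/")

def has_duplicate_parens(expr: str) -> bool:
    """Return True if some parenthesis pair encloses no operator."""
    stack = []
    for ch in expr:
        if ch == ')':
            has_op = False
            # Pop everything inside this pair, noting any operator.
            while stack and stack[-1] != '(':
                if stack.pop() in OPERATORS:
                    has_op = True
            stack.pop()          # discard the matching '('
            if not has_op:       # pair enclosed no operator: redundant
                return True
        elif not ch.isspace():
            stack.append(ch)
    return False
```

Note that by this definition `(a*b)+c` has no duplicate pair (the parentheses enclose an operator), which is exactly the ambiguity raised earlier in the thread about parentheses that add no precedence information.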
@rahul...
If you plan to just find the number of duplicate parentheses,
then that would be ( No. of open parentheses - No. of operators + 1 )
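As a sketch, the count suggested above can be computed directly (the function name is my own; whether the formula matches your definition of "duplicate" depends on the expression grammar — for instance, it misbehaves on an expression with no operators at all):

```python
def duplicate_paren_count(expr: str) -> int:
    """Apply the count suggested above: opens - operators + 1."""
    opens = sum(1 for ch in expr if ch == '(')
    operators = sum(1 for ch in expr if ch in "+-*/")
    return opens - operators + 1
```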
On Dec 10, 10:22 pm, rahul venkat wrote:
> hi everyone,
>
> can you suggest an algorithm for finding the duplicate parentheses in a given
> expression ?
hey pramod, I am sorry. It is not working for certain test cases.
On 1/2/06, pramod <[EMAIL PROTECTED]> wrote:
> Hi Varadha,
> Could you present your logic please?
Hi Varadha,
Could you present your logic please?
hi guys
I am new to this group. I have one solution to find the duplicate number:
lp = rp = (n/2) + 1    // lp: left pointer, rp: right pointer
lh = rh = result = 0   // lh: left half, rh: right half
result ^= num(lp)      // ^ is XOR
for(;(lp>-1||rp
{
if(lp>-1){
if(num(lp)^result==lh || num(lp)^result=
Hi,
First of all, to adak: this problem cannot use your logic, as even if the
range is from 1 to 65000, as you have given for example, I can easily
pick out 4 numbers like 1, 10, 456, 65000. The complexity of your algo is
65000; mine would be 4 lg 4 = 8.
Pramod: Great point man, even I thought it had to b
Hashing has collisions. What if all the numbers hash to the same
position (in the worst case)? Then you will be back to square one.
Hashing is just a hack for any problem ... not a proper solution.
-Dhyanesh
On 12/8/05, mathmoi <[EMAIL PROTECTED]> wrote:
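For reference, the hashing approach under discussion is a few lines with a set: expected O(n) time and O(n) space, degrading toward the all-collisions worst case Dhyanesh describes only under adversarial input (a sketch, not code from the thread; the function name is my own):

```python
def find_duplicate(nums):
    """Return the one repeated value, or None if all are distinct.

    Expected O(n) time via hash-set membership; worst case is worse
    if every value collides, as noted in the thread.
    """
    seen = set()
    for x in nums:
        if x in seen:
            return x
        seen.add(x)
    return None
```

This handles Dhyanesh's own test case with huge values like 10^100, since a hash set does not need an array spanning the value range.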
Dhyanesh wrote:
> I agree it is faster but you cannot use it with all inputs. I gave you a
> test case and you still haven't come out on how to solve that. Your algorithm
> is limited to a range of numbers only. It will be very slow even for numbers
> like 2^31 or so which fit into the size of an
I think you are right on this point ... my reduction to the
element uniqueness problem is not perfect. Maybe there could exist a
better algorithm or maybe there is another way to reduce it to element
uniqueness. Maybe some one can come up with a faster way than O ( nlgn
) .
-DhyaneshOn 12/4/05, p
But Dhyanesh, don't forget the condition that we are given that exactly
one number is repeated twice. What you said is true for the general
case. How do you prove that for this special case, there's no algorithm that
takes less than O(nlogn)? If such an algorithm exists and we give an
input consisting
I mentioned the range constraint of the algorithm previously (which I
guess you ignored). My assertion is that it can be used with
many sets of integers in the real world, and whenever it can be used,
it is very fast indeed.
In academia, huge and negative numbers really have their place. In t
I agree it is faster but you cannot use it with all inputs. I gave you
a test case and you still haven't come out on how to solve that. Your
algorithm is limited to a range of numbers only. It will be very slow
even for numbers like 2^31 or so which fit into the size of an integer
(and you can run i
I don't see the contradiction. I'm unsure what the big-O figure would
be for this, but it is VERY much faster than quicksort, which I believe
is O(n log n). I've used this for things like "sorting" and finding
some unique distribution feature from a set, for many years.
It can't be used for all
Hi Pramod,
Suppose you could solve this problem in better than O(n log n). That means
that, given an array, you would be able to find the duplicate number
quicker than O(n log n). Using the same algo you would be able to
determine that there are no duplicate numbers, since a search for the
duplicate number that comes up empty shows all elements are distinct. That
would decide element uniqueness faster than its proven O(n log n)
comparison lower bound, which is a contradiction.
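The O(n log n) baseline that both sides of this exchange take for granted is sort-then-scan: after sorting, the one repeated value must occupy adjacent positions (a sketch, assuming a comparison-based sort; the function name is my own):

```python
def find_duplicate_by_sorting(nums):
    """O(n log n) baseline: sort, then compare neighbours.

    Works for arbitrary comparable values, including ones far too
    large to index an array with.
    """
    s = sorted(nums)
    for a, b in zip(s, s[1:]):
        if a == b:
            return a
    return None          # all elements distinct
```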
What is the range of random numbers here, Pramod?
Are we talking about int's or long's or double's or what data type,
exactly? When you said "numbers" in the original post, I was thinking
"integers", but integers can be covered by the distribution counting
algorithm, on modern compilers.
So wha
Certainly I can!
Oh no! My secret Cray in the basement secret is out!!
Of course, this is a constraint to the algorithm. Fortunately, many
times the random numbers are within a certain range which CAN be
covered by an array's subscripts.
And if it can be used, its run time is faster than any
I agree with Dhyanesh. This "counting" sort routine prescribed by Adak
fails when the numbers can be random.
Dhyanesh, I searched for "element uniqueness" and that problem is more
generic. It says to find if all the numbers are unique. But in my
problem it is known that exactly one number is dupli
Btw how would your algo work if the numbers were
{ 1, 4, 5, 10^100 , 45, 5 }
I don't believe you can create an array of size 10^100.
This problem cannot be solved in better than O(n lg n).
-Dhyanesh
On 12/2/05, adak <[EMAIL PROTECTED]> wrote:
Hi pramod,
I'm not sure what the "right" name for this is, if anybody knows I hope
they jump in with that info for us.
In the original problem, the set of numbers contained just ONE
duplicate value. So let's say the set is { 3, 9, 6, 4, 4}
The array that is built up by the loop would then be ha
Hi
If the numbers are arbitrary then you cannot do better than O(n log n)
... this is a proven lower bound. Google for "Element
Uniqueness" and you will find links to these proofs.
-Dhyanesh
On 12/2/05, pramod <[EMAIL PROTECTED]> wrote:
I am sorry I did not get your solution.
So if the given array is say { 5, 1, 8, 100 }
all are > 1, how will your algo work?
Or did you have counting sort in mind? For my problem, counting sort is
ruled out.
If it's possible for the array to encompass, by subscript, the entire
range of the numbers, then this is VERY fast:
/* array[] must be zero-initialized and large enough to index
   every possible value of n */
for each n in the input {
    array[n] += 1;
    if (array[n] > 1)   /* you have found the duplicate: n */
        break;
}
I call this "doing a distribution count", but I'm sure there's a better
term.
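adak's distribution count, written out as a runnable sketch (the function and parameter names are my own; as discussed above, it only applies when the values fit a known, small, non-negative range):

```python
def find_duplicate_by_counting(nums, max_value):
    """adak's distribution count: O(n) time, O(max_value) memory.

    Requires every value to satisfy 0 <= value <= max_value, which is
    exactly the range constraint debated in this thread.
    """
    counts = [0] * (max_value + 1)
    for n in nums:
        counts[n] += 1
        if counts[n] > 1:     # second sighting: n is the duplicate
            return n
    return None
```

Using the example set from earlier in the thread, `find_duplicate_by_counting([3, 9, 6, 4, 4], 9)` returns 4.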