[algogeeks] Re: Complexity of Algorithms

2010-05-08 Thread Ralph Boland


On May 5, 7:59 am, Varun Nagpal varun.nagp...@gmail.com wrote:
 The complexity of an algorithm focuses on two aspects: the time it takes to
 execute the algorithm (time complexity) and the amount of memory it takes to
 store the associated data (space complexity). Most literature in computer
 science focuses on time complexity, as it directly influences the
 performance of the algorithm.

For data structures there are often three complexities:
   1) Time to build the data structure (e.g. build a balanced binary
tree in linear time).
   2) Space required by the data structure (e.g. the tree requires linear
space).
   3) Time to use the data structure to retrieve some piece of information
   (e.g. find a leaf node from the root node in O(log n) time).

 The complexity of an algorithm is usually based on a model of the machine on
 which it will execute. The most popular model is the RAM, or Random Access
 Machine, model (http://en.wikipedia.org/wiki/Random_access_machine). The
 simple assumption of this machine model is that every operation (arithmetic
 and logic) takes a single step, and each step takes some constant time. So
 by counting the number of steps it takes to execute the algorithm, you can
 find the total time it takes to execute the algorithm. In this sense unit
 time and unit step are considered equivalent or synonymous. Although the RAM
 is not an accurate model of an actual machine, its main advantage is that it
 allows a machine-independent analysis and comparison of algorithms.

 So, the time complexity of an algorithm is measured in terms of the total
 number of steps the algorithm takes before it terminates. It is usually
 expressed as a function of the input size or problem size. Input size can
 have different meanings, but for simplicity you can take it to be the number
 of objects given as input to the algorithm (say N), where an object could be
 an integer, a character, etc. Now if T(N) is the time complexity of the
 algorithm:

 T(N) = number of steps (or time) it takes to execute the algorithm.

 T(N) could be any mathematical function: a constant, a linear function of N,
 a polynomial in N, a poly-logarithmic function of N, an exponential function
 of N, etc.

 Finding the time complexity of an algorithm basically involves analysis from
 three perspectives: worst-case, average-case, and best-case execution time.
 The algorithm will take a different number of steps for different classes of
 inputs or different instances of the input. For some class of inputs it will
 take the least time (best case); for another class it will take the maximum
 time (worst case).

 Average-case analysis requires finding the average (the expectation, in
 statistical terms) of the number of execution steps over each and every
 possible class of inputs.

 Best-case execution time is seldom of any importance. Average-case execution
 time is sometimes important, but most important is worst-case execution
 time, as it gives you an upper bound on the execution time and so a lower
 bound on obtainable performance.

I tend to think the average case is more important than the worst case.
Which is more important: the average case for quicksort or the
worst case for quicksort?
One of the reasons one sees worst-case analysis much more often than
average-case analysis is that average-case analysis is usually much
harder to do; compare, for example, the worst-case and average-case
analyses of quicksort.


 In computer science, though, expressing T(N) as a pure mathematical function
 is seldom given importance. More important is knowing the asymptotic
 behavior of the algorithm, or asymptotic growth rate, i.e. how quickly T(N)
 grows as N becomes extremely large (approaching infinity).

 So instead of expressing T(N) as a pure and precise mathematical function,
 various other notations have been devised. As far as I know, there are at
 least five notations used to express T(N), namely
 Big-O (O), little-o (o), Big-Omega (Ω), little-omega (ω), and Theta (Θ).

 Big-O is used for representing upper bounds (worst case), while Big-Omega is
 for expressing lower bounds (best case). The little notations are stricter
 versions. Theta notation is used for expressing those functions whose upper
 and lower bounds are the same, up to a constant multiple of the same
 function.


One should be careful not to confuse upper bound with worst case (or
lower bound with best case). One can determine an upper bound on the
best-case performance of an algorithm, and similarly determine a lower
bound on the worst-case performance!
One can also determine an upper bound and a lower bound on the
average-case performance. If these are the same, then Theta notation
can be used to describe the average-case performance.
For example, a lower bound on the average-case performance of quicksort
is Omega(n), and an upper bound on the average-case performance is
O(n * n).
Of course, if you are smart, you can 

[algogeeks] Yahoo coding round question

2010-05-08 Thread Jitendra Kushwaha
Find the longest common subsequence of N given strings, each having
length between 0 and M.
Can anybody give a good approach to the solution?

-- 
You received this message because you are subscribed to the Google Groups 
Algorithm Geeks group.
To post to this group, send email to algoge...@googlegroups.com.
To unsubscribe from this group, send email to 
algogeeks+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/algogeeks?hl=en.



[algogeeks] Re: a puzzle

2010-05-08 Thread Afroz Mohiuddin
It may not be only a rotation; it could be a rotation, a flip, etc.: just
an orthogonal/orthonormal transformation matrix.


-- 
We are here on earth to do good for others. What the others are here for, I
don't know.

Afroz Mohiuddin




[algogeeks] a puzzle

2010-05-08 Thread Afroz Mohiuddin
I have a set of n-dimensional vectors, say v1, v2, v3, ..., vm.

⟨vi, vj⟩ means the dot product of vi and vj.

It is known that ⟨vi, vj⟩ >= 0 for all i, j, i.e. the angle between any
two of them is less than or equal to 90 degrees, which basically means that
all of them can fit in a single orthant (the n-dimensional analogue of a
quadrant), because in any orthant the maximum angle between two vectors
lying inside it is 90 degrees.

Can you come up with an orthonormal transformation U such that Uvi >= 0
(componentwise) for all i, i.e. an orthonormal transformation (a rotation,
basically; I want the rotation matrix U) such that all those vectors land
in the positive orthant?

There will always exist such a matrix U because all the angles are at most
90 degrees; can you find a way to compute one?


-- 
We are here on earth to do good for others. What the others are here for, I
don't know.

Afroz Mohiuddin
Final Year Masters Student
Dept Computer Science and Engineering
Indian Institute of Technology Kanpur
Kanpur - 208016
INDIA

Address:  F-112 Hall 9
Telephone: [91]9838773891
Email: afrozena...@gmail.com
  a...@iitk.ac.in
  a...@cse.iitk.ac.in




[algogeeks] Re: Complexity of Algorithms

2010-05-08 Thread scanfile

Sorry for replying after so long.
@Varun: thanks for the great tutorial, but I'm still confused about the
concept of algorithm complexity. I do not understand whether complexity
applies to algorithms or to programs.



Re: [algogeeks] Re: a google question

2010-05-08 Thread Jitendra Kushwaha
@Amit Agarwal

You are missing some pairs with a larger sum than the ones you mentioned,
for example:
7 + 49 = 56, which is greater than 53
similarly
7 + 48
7 + 47
(3 + 49 = 52)   (50 + 1 = 51)


---
Regards
Jitendra Kushwaha
Undergradute Student
Computer Science  Eng.
MNNIT, Allahabad




Re: [algogeeks] Yahoo coding round question

2010-05-08 Thread sharad kumar
Please check Cormen (CLRS); I think that's the most efficient. Check under
the DP chapter.

On Sat, May 8, 2010 at 2:30 PM, Jitendra Kushwaha
jitendra.th...@gmail.comwrote:

 Find the longest common subsequence of N given strings, each having
 length between 0 and M.
 Can anybody give a good approach to the solution?





-- 
yezhu malai vaasa venkataramana Govinda Govinda




Re: [algogeeks] tree from linked list

2010-05-08 Thread Arun prasath
This does not create a balanced tree, but it ensures that every element in
the tree is accessible in O(log n) time.

Time complexity: O(n)


[a...@91blore-srv1 ~]$ cat recursion.c
#include <stdlib.h>
#include <unistd.h>
#include <stdio.h>

#define TEST2
#ifdef TEST1
int arr[] = { 1, 2, 3, 4, 5, 6, 7 };
int max_elems = sizeof(arr) / sizeof(arr[0]);
#endif

#ifdef TEST2
int arr[] = { 1, 2, 3, 4, 5 };
int max_elems = sizeof(arr) / sizeof(arr[0]);
#endif

#ifdef TEST3
int arr[] = { 1, 2, 3, 4, 5, 6, 7, 8 };
int max_elems = sizeof(arr) / sizeof(arr[0]);
#endif

#define LIST_EMPTY -1

struct tree {
    int data;
    struct tree *left, *right;
};

struct tree *function(int, int);
void print_inorder(struct tree *);

/* Stands in for the sorted linked list: returns the next element,
 * or LIST_EMPTY when the list is exhausted. */
int return_next_from_list(void)
{
    static int nxt_elem = 0;
    if (nxt_elem < max_elems)
        return arr[nxt_elem++];

    return LIST_EMPTY; /* empty condition */
}

int main()
{
    unsigned int x = max_elems;
    struct tree *head;

    /* Clear the lowest set bit until only one bit remains:
     * x becomes the largest power of two <= max_elems. */
    while (x & (x - 1)) {
        x = x & (x - 1);
    }

    head = function(0, x);
    print_inorder(head);
    free(head);
    return 0;
}

/* Build the tree top-down by in-order position: this call produces the
 * node at in-order index mid + i with subtree half-width i. Nodes at
 * odd in-order indices are leaves. */
struct tree *function(int mid, int i)
{
    int val = mid + i;
    if (val & 1) {
        struct tree *leaf = malloc(sizeof(struct tree));
        leaf->left = leaf->right = NULL;
        leaf->data = return_next_from_list();
        if (leaf->data == LIST_EMPTY) {
            free(leaf);
            return NULL;
        }
        return leaf;
    }
    struct tree *non_leaf = malloc(sizeof(struct tree));

    non_leaf->left = function(mid, i / 2);
    non_leaf->data = return_next_from_list();
    if (non_leaf->data == LIST_EMPTY) {
        struct tree *tmp = non_leaf->left;
        free(non_leaf);
        return tmp;
    }
    non_leaf->right = function(mid + i, i / 2);
    return non_leaf;
}

/* Prints the tree in order; also frees each subtree once it has been
 * visited (the root itself is freed by main). */
void print_inorder(struct tree *root)
{
    struct tree *trav = root;
    if (!trav) {
        return;
    }
    print_inorder(trav->left);
    if (trav->left)
        free(trav->left);
    printf("{%d}", trav->data);
    print_inorder(trav->right);
    if (trav->right)
        free(trav->right);
}
[a...@91blore-srv1 ~]$


On Sun, May 2, 2010 at 6:38 PM, divya sweetdivya@gmail.com wrote:

 You are given a sorted linked list; construct a balanced binary search
 tree from it.







Re: [algogeeks] Yahoo coding round question

2010-05-08 Thread Mario Ynocente Castro
I don't think DP is a good strategy for the N-string case.

2010/5/8 sharad kumar aryansmit3...@gmail.com

 Please check Cormen (CLRS); I think that's the most efficient. Check under
 the DP chapter.

 On Sat, May 8, 2010 at 2:30 PM, Jitendra Kushwaha 
 jitendra.th...@gmail.com wrote:

 Find the longest common subsequence of N given strings, each having
 length between 0 and M.
 Can anybody give a good approach to the solution?





 --
 yezhu malai vaasa venkataramana Govinda Govinda





-- 
Mario Ynocente Castro
Undergraduate Student of System Engineering
National University of Engineering, Peru

http://sites.google.com/site/ycmario
