[algogeeks] Re: Complexity of Algorithms [√n and log (n^2)]

2010-07-31 Thread sourav sain
A small correction.
You need to prove whether f(n) = O(g(n)).

My proof (under Note) is for f(n) = Ω(g(n)).

On Sat, Jul 31, 2010 at 12:08 AM, sourav souravs...@gmail.com wrote:

 f(n) = sqrt(n)   [square root of n]
 g(n) = log(n^2)  [log of n squared]

 For the above pair of functions, is f(n) = Ω(g(n))? That is, is there some
 c > 0 such that f(n) >= c*g(n) for all sufficiently large n? Give a proof
 whether the answer is yes or no.

 ---
 Note: f(n) = O(g(n)) is proved as below. Need to find if f(n) = Ω(g(n))
 also.
 Let a = √n; then log a = (1/2) log n.
 As the logarithm of a number is smaller than the number itself (for a >= 1),
 we have
      a > log a
 =>  √n > (1/2) log n
 =>  √n > (1/4)(2 log n)
 =>  √n > (1/4) log(n^2)
 Hence √n >= c * log(n^2) for c = 1/4.
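A quick numerical sanity check of the final inequality (a minimal Python
sketch; the sample values of n are arbitrary):

import math

# Compare sqrt(n) against (1/4) * log(n^2) = (1/2) * log(n) to confirm
# sqrt(n) >= c * log(n^2) holds with c = 1/4 once n is large enough.
for n in (4, 16, 256, 10**6, 10**12):
    lhs = math.sqrt(n)
    rhs = 0.25 * math.log(n * n)   # natural log; a different base only rescales c
    print(n, lhs, rhs, lhs >= rhs)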

-- 
You received this message because you are subscribed to the Google Groups 
Algorithm Geeks group.
To post to this group, send email to algoge...@googlegroups.com.
To unsubscribe from this group, send email to 
algogeeks+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/algogeeks?hl=en.



Re: [algogeeks] Re: Complexity of Algorithms

2010-05-12 Thread Varun Nagpal
A program is just an implementation of an algorithm; you may use any
language to implement an algorithm as a program. To make time and space
complexity analysis independent of the language or computing platform, we
associate them with the algorithm. This is also useful when you need to
compare different solutions to the same problem without getting bogged down
in programming-language constructs. That is why it is good practice to write
algorithms in a pseudo programming language, do the complexity analysis
there, and then compare; otherwise it is simply impossible to do a complexity
analysis of every possible implementation of the algorithm.
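As a small illustration of doing the analysis at the pseudocode level, here
is a sketch (the function is illustrative, written in Python but close to
pseudocode):

# Linear search, written close to pseudocode; the analysis below does not
# depend on the language the algorithm is eventually implemented in.
def linear_search(items, target):
    for i, x in enumerate(items):   # loop body runs at most n times
        if x == target:             # one comparison per iteration
            return i
    return -1
# Worst case: n comparisons, so T(n) = O(n) for any faithful implementation.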

The book by Thomas Cormen is the bible of algorithms, but it is big and not
easy to read. The other book that I had suggested has possibly the best
explanations of the concepts at the undergraduate level. I haven't come
across any other books better than these two. There is one more book which
is slightly more basic and easier to read: Analysis of Algorithms, Jones &
Bartlett Publishers.




On Sat, May 8, 2010 at 5:18 PM, scanfile rahul08k...@gmail.com wrote:


 Sorry for replying after such a long time.
 @varun, thanks for the great tutorial... but I'm still confused about the
 concept of complexity. I do not understand whether complexity is defined
 for algorithms or for programs.



[algogeeks] Re: Complexity of Algorithms

2010-05-08 Thread Ralph Boland


On May 5, 7:59 am, Varun Nagpal varun.nagp...@gmail.com wrote:
 The complexity of an algorithm is focused on two aspects: the time it takes
 to execute the algorithm (Time Complexity) and the amount of memory it
 takes to store the associated data (Space Complexity). Most literature in
 computer science focuses on Time Complexity, as it directly influences the
 performance of the algorithm.

For data structures there are often three complexities (a short sketch
follows the list):
   1) Time to build the data structure (e.g. build a balanced binary
      tree in linear time).
   2) Space required by the data structure (e.g. the tree requires
      linear space).
   3) Time to use the data structure to answer some query (e.g. find a
      leaf node from the root node in O(log n) time).
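A concrete instance of those three costs, sketched in Python with a sorted
array standing in for the balanced tree (names are illustrative):

import bisect

# 1) Build: sorting costs O(n log n) time (O(n) if the input is already sorted).
def build_index(values):
    return sorted(values)

# 2) Space: the index keeps one entry per value, i.e. linear space.

# 3) Query: binary search answers a membership query in O(log n) time.
def contains(index, x):
    i = bisect.bisect_left(index, x)
    return i < len(index) and index[i] == x

index = build_index([7, 3, 9, 1, 5])
print(contains(index, 5), contains(index, 4))   # True False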

 The complexity of an algorithm is usually based on a model of the machine
 on which it will execute. The most popular model is the RAM, or Random
 Access Machine, model (http://en.wikipedia.org/wiki/Random_access_machine).
 The simple assumption of this machine model is that every operation
 (arithmetic and logic) takes a single step, and each step takes some
 constant time. So by finding the number of steps it takes to execute the
 algorithm, you can find the total time it takes to execute the algorithm.
 In this sense Unit Time and Unit Step are considered equivalent or
 synonymous. Although the RAM is not an accurate model of an actual machine,
 its main advantage is that it allows a machine-independent analysis and
 comparison of algorithms.

 So, the Time Complexity of an algorithm is measured in terms of the total
 number of steps the algorithm takes before it terminates. It is usually
 expressed as a function of the input size or problem size. Input size can
 have different meanings, but for simplicity you can assume it to be the
 number of objects given as input to the algorithm (say N). An object could
 mean an integer, a character, etc. Now if T(N) is the time complexity of
 the algorithm, then

 T(N) = number of steps (or time) it takes to execute the algorithm.

 T(N) could be any mathematical function of N: constant, linear in N,
 polynomial in N, poly-logarithmic in N, exponential in N, etc.
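To make the step-counting view concrete, a minimal sketch that counts the
dominant operation of a simple loop (names are illustrative):

# Count the comparisons made while finding the maximum; each iteration does a
# constant amount of work, so the count is a reasonable proxy for RAM steps.
def max_with_count(values):
    steps = 0
    best = values[0]
    for x in values[1:]:
        steps += 1              # one comparison per remaining element
        if x > best:
            best = x
    return best, steps

for n in (10, 100, 1000):
    _, steps = max_with_count(list(range(n)))
    print(n, steps)             # steps == n - 1, so T(N) grows linearly in N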

 Finding the Time Complexity of an algorithm basically involves analysis
 from three perspectives: worst-case execution time, average-case execution
 time, and best-case execution time. The algorithm will take a different
 number of steps for different classes of inputs or different instances of
 the input. For some class of inputs it will take the least time (best
 case); for another class of inputs it will take the maximum time (worst
 case).

 Average-case analysis requires finding the average (the expectation, in
 statistical terms) of the number of execution steps over each and every
 possible class of inputs.

 Best-case execution time is seldom of any importance. Average-case
 execution time is sometimes important, but most important is worst-case
 execution time, as it gives you an upper bound on the execution time and
 hence a lower bound on the obtainable performance.

I tend to think average case is more important than worst case.
Which is more important: the average case for quicksort or the
worst case for quicksort?
One of the reasons one sees worst-case analysis much more often than
average-case analysis is that average-case analysis is usually much
harder to do; compare, for example, the worst-case and average-case
analyses of quicksort.
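Ralph's quicksort example can be checked empirically; a small sketch using a
first-element-pivot quicksort and counting pivot comparisons (one per element
partitioned; names are illustrative):

import random

def quicksort_count(a):
    # Returns (sorted list, number of pivot comparisons).
    if len(a) <= 1:
        return list(a), 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    ls, lc = quicksort_count(left)
    rs, rc = quicksort_count(right)
    return ls + [pivot] + rs, lc + rc + len(rest)

n = 300
_, worst = quicksort_count(list(range(n)))             # sorted input: ~n^2/2
_, avg = quicksort_count(random.sample(range(n), n))   # random input: ~2 n ln n
print(worst, avg)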


 In computer science, though, expressing T(N) as an exact mathematical
 function is seldom given importance. More important is knowing the
 asymptotic behaviour of the algorithm, or its asymptotic growth rate, i.e.
 how quickly T(N) grows as N becomes extremely large (approaches infinity).

 So instead of expressing T(N) as a pure and precise mathematical function,
 various notations have been devised. As far as I know, there are at least
 five notations used to express T(N), namely Big-O (O), little-o (o),
 Big-Omega (Ω), little-omega (ω), and Theta (Θ).

 Big-O is used for representing an upper bound (worst case), while Big-Omega
 is for expressing a lower bound (best case). The little notations are
 stricter. Theta notation is used for those functions whose upper and lower
 bounds are the same function, up to constant multiples.
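For reference, the standard definitions behind these notations, written in
LaTeX (c and n_0 denote positive constants):

  f(n) = O(g(n))      \iff \exists\, c, n_0 : f(n) \le c\, g(n) \ \text{for all}\ n \ge n_0
  f(n) = \Omega(g(n)) \iff \exists\, c, n_0 : f(n) \ge c\, g(n) \ \text{for all}\ n \ge n_0
  f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
  f(n) = o(g(n))      \iff \forall\, c > 0\ \exists\, n_0 : f(n) < c\, g(n) \ \text{for all}\ n \ge n_0
  f(n) = \omega(g(n)) \iff \forall\, c > 0\ \exists\, n_0 : f(n) > c\, g(n) \ \text{for all}\ n \ge n_0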


One should be careful not to confuse upper bound with worst case (or lower
bound with best case). One can determine an upper bound on the best-case
performance of an algorithm, and similarly determine a lower bound on the
worst-case performance! One can also determine an upper bound and a lower
bound on the average-case performance. If these are the same, then Theta
notation can be used to describe the average-case performance.
For example, a lower bound on the average-case performance of quicksort is
Omega(n), and an upper bound on the average-case performance is O(n * n).
Of course, if you are smart, you can
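The bound-versus-case distinction can also be seen empirically; a sketch
using insertion sort (chosen only because its best and worst cases are easy
to trigger; names are illustrative):

# Count comparisons of insertion sort on its best-case input (already sorted)
# and its worst-case input (reverse sorted): the best case is Theta(n) and
# the worst case is Theta(n^2), and each case has its own upper and lower bound.
def insertion_sort_count(a):
    a, comps = list(a), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comps += 1
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return comps

n = 1000
print(insertion_sort_count(range(n)))          # about n comparisons (best case)
print(insertion_sort_count(range(n, 0, -1)))   # about n^2 / 2 comparisons (worst case)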

[algogeeks] Re: Complexity of Algorithms

2010-05-08 Thread scanfile

Sorry for replying after such a long time.
@varun, thanks for the great tutorial... but I'm still confused about the
concept of complexity. I do not understand whether complexity is defined
for algorithms or for programs.

