On May 5, 7:59 am, Varun Nagpal <varun.nagp...@gmail.com> wrote:
> The complexity of an algorithm is concerned with two aspects: the time it
> takes to execute the algorithm (time complexity) and the amount of memory it
> takes to store the associated data (space complexity). Most literature in
> computer science focuses on time complexity, as it directly influences the
> performance of an algorithm.

For data structures there are often three complexities:
   1) Time to build the data structure (e.g. build a balanced binary
tree in linear time).
   2) Space required by the data structure (e.g. the tree requires linear
space).
   3) Time to use the data structure to gather some piece of
information (e.g. find a leaf node from the root node in O(log n) time).
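
To make that concrete, here is a minimal Python sketch (my own
illustration, not part of the original post): it builds a balanced binary
search tree from a sorted list in linear time, stores it in linear space,
and answers a lookup by walking from the root in O(log n) time.

    # Build a balanced BST from a sorted list of keys -- O(n) time, O(n) space.
    def build_balanced(keys, lo=0, hi=None):
        if hi is None:
            hi = len(keys)
        if lo >= hi:
            return None
        mid = (lo + hi) // 2
        # A node is a tuple: (key, left subtree, right subtree).
        return (keys[mid],
                build_balanced(keys, lo, mid),
                build_balanced(keys, mid + 1, hi))

    # Walk from the root towards a leaf -- O(log n) time on a balanced tree.
    def contains(node, key):
        while node is not None:
            k, left, right = node
            if key == k:
                return True
            node = left if key < k else right
        return False

    root = build_balanced([1, 3, 5, 7, 9, 11, 13])
    print(contains(root, 9), contains(root, 4))   # True False
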
>
> The complexity of an algorithm is usually based on a model of the machine on
> which it will execute. The most popular model is the
> RAM <http://en.wikipedia.org/wiki/Random_access_machine> or Random
> Access Machine model. The simple assumption of this machine model is
> that every operation (arithmetic and logic) takes a single step and that
> each step takes some constant time. So by counting the number of
> steps it takes to execute the algorithm, you can find the total time it
> takes to execute the algorithm. In this sense unit time and unit step are
> considered equivalent or synonymous. Although the RAM is not an accurate
> model of an actual machine, its main advantage is that it allows a
> machine-independent analysis & comparison of algorithms.
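
As an aside, here is one way to make the unit-cost step counting concrete
(my own sketch; the exact counts are one reasonable accounting, not the
only one): summing an array of N numbers costs a constant number of steps
per iteration plus a constant amount of setup, so T(N) is a linear
function of N.

    def array_sum(a):
        total = 0              # 1 step (assignment)
        for x in a:            # N iterations of loop bookkeeping, ~1 step each
            total = total + x  # 1 addition + 1 assignment per iteration
        return total           # 1 step
    # Roughly T(N) = 3N + 2 unit steps under this accounting -- linear in N.

    print(array_sum([1, 2, 3, 4]))   # 10
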
>
> So, the time complexity of an algorithm is measured in terms of the total
> number of steps the algorithm takes before it terminates. It is usually
> expressed as a function of the input size or problem size. Input size can
> have different meanings, but for simplicity you can take it to be the number
> of objects given as input to the algorithm (say N). An object could be
> an integer, a character, etc. Now if T(N) is the time complexity of the
> algorithm:
>
> T(N) = number of steps (or time) it takes to execute the algorithm.
>
> T(N) could be any mathematical function: a constant function, a linear
> function of N, a polynomial in N, a poly-logarithmic function of N, an
> exponential function of N, etc.
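
To see why the growth class matters far more than the exact constants
(again, my own illustrative snippet), evaluating a few typical forms of
T(N) at increasing N makes the gap obvious:

    import math

    growth = {
        "constant":    lambda n: 1,
        "logarithmic": lambda n: math.log2(n),
        "linear":      lambda n: n,
        "N log N":     lambda n: n * math.log2(n),
        "quadratic":   lambda n: n * n,
        "exponential": lambda n: 2 ** n,
    }

    for name, f in growth.items():
        print(name, [round(f(n)) for n in (10, 20, 40)])
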
>
> Finding the time complexity of an algorithm basically involves analysis from
> three perspectives: worst case execution time, average case execution
> time and best case execution time. The algorithm will take a different
> number of steps for different classes of inputs or different instances of
> the input. For some class of inputs it will take the least time (best case).
> For another class of inputs it will take the maximum time (worst case).
>
> Average case execution time analysis requires finding the average (the
> expectation, in statistical terms) of the number of execution steps over
> each and every possible class of inputs.
>
> Best case execution time is seldom of any importance. Average case execution
> time is sometimes important, but most important is worst case execution
> time, as it gives you an upper bound on the execution time and so a lower
> bound on the obtainable performance.
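
The standard textbook illustration of the three cases (my example, not the
original poster's) is linear search: 1 comparison in the best case, N in
the worst case, and about (N + 1) / 2 on average if the key is equally
likely to be at any position.

    def linear_search(a, key):
        # Number of comparisons grows with the position of the key.
        for i, x in enumerate(a):
            if x == key:
                return i       # best case: key at position 0, 1 comparison
        return -1              # worst case: key absent, N comparisons

    data = [4, 8, 15, 16, 23, 42]
    print(linear_search(data, 4))    # best case
    print(linear_search(data, 99))   # worst case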

I tend to think the average case is more important than the worst case.
Which is more important:  the average case for quicksort or the
worst case for quicksort?
One of the reasons one sees worst case analysis much more than
average case analysis is that average case analysis is usually much
harder to do; compare, for example, the worst case and average case
analyses of quicksort.
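
For what it is worth, here is a small sketch of that contrast (my own code,
using the classic first-element pivot): on an already sorted input the
partitions are maximally unbalanced and the running time degrades towards
Theta(n^2), while a random pivot gives the expected n log n behaviour.

    import random

    def quicksort(a, randomize=False):
        if len(a) <= 1:
            return a
        # First-element pivot is the classic worst-case-prone choice.
        pivot = random.choice(a) if randomize else a[0]
        less    = [x for x in a if x < pivot]
        equal   = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return quicksort(less, randomize) + equal + quicksort(greater, randomize)

    print(quicksort(list(range(20))))         # sorted input: recursion depth ~ n
    print(quicksort(list(range(20)), True))   # random pivot: expected depth ~ log n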

>
> In computer science, though, expressing T(N) as a pure mathematical
> function is seldom given importance. More important is knowing the
> asymptotic behavior of the algorithm, or asymptotic growth rate, i.e. how
> quickly T(N) grows as N goes to extremely large values (approaching
> infinity).
>
> So instead of expressing T(N) as a pure and precise mathematical
> function, different other notations have been devised.
> As far as I know, there are at least 5 notations used to express T(N),
> namely Big-O ("O"), little-o ("o"), Big-Omega ("Ω"), little-omega ("ω"),
> and Theta ("Θ").
>
> Big-O is used for representing an upper bound (worst case), while Big-Omega
> is for expressing a lower bound (best case). The little notations are
> stricter versions of these. Theta notation is used for expressing functions
> whose upper and lower bounds are the same function, up to constant
> multiples.
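
To pin the notation down with a worked example (mine, not the original
poster's): f(N) = O(g(N)) means there are constants c > 0 and N0 such that
f(N) <= c * g(N) for all N >= N0. For instance 3N^2 + 5N = O(N^2) with
c = 4 and N0 = 5, since 5N <= N^2 once N >= 5; the snippet below just
spot-checks that inequality.

    f = lambda n: 3 * n * n + 5 * n
    g = lambda n: n * n
    c, n0 = 4, 5

    # Spot-check the Big-O witness constants over a range of N >= N0.
    print(all(f(n) <= c * g(n) for n in range(n0, 10000)))   # True
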
>

One should be careful not to confuse upper bound with worst case (or
lower bound with best case).  One can determine an upper bound on the
best case performance of an algorithm and similarly determine a lower
bound on the worst case performance!
One can also determine an upper bound and a lower bound on the average
case performance.  If these are the same then Theta notation can be used
to describe average case performance.
For example, a lower bound on the average case performance of quicksort
is Omega(n) and an upper bound on the average case performance is O(n * n).
Of course, if you are smart, you can prove that the average case
performance of quicksort is Theta(n * log(n)).
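
A rough sketch of why (my own note, stated without the full proof): the
average-case recurrence is T(n) = (n - 1) + (2/n) * (T(0) + ... + T(n-1))
with T(0) = T(1) = 0, and it solves to roughly 2 n ln n, i.e. Theta(n log n).
Evaluating the recurrence directly shows it tracking n log n rather than n^2:

    import math

    # Average-case quicksort comparison count:
    # T(n) = (n - 1) + (2/n) * (T(0) + ... + T(n-1)), with T(0) = T(1) = 0.
    N = 1000
    T = [0.0] * (N + 1)
    prefix = 0.0                       # running sum T(0) + ... + T(n-1)
    for n in range(2, N + 1):
        prefix += T[n - 1]
        T[n] = (n - 1) + 2.0 * prefix / n

    print(round(T[N]))                 # recurrence value at n = 1000 (~11000)
    print(round(2 * N * math.log(N)))  # 2 n ln n for comparison (~13816)
    print(N * N)                       # n^2 would be 1000000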

> ...

Regards,

Ralph Boland
