On Wed, Oct 24, 2001 at 06:29:39PM -0500, Ian Patrick Thomas wrote:
> Is there a way to get a more accurate time value, preferably to a
> nanosecond, using the system clock? I need to write a program that
> outputs the time it takes various sorting algorithms to sort various
> numbers of integers. For smaller numbers of integers on algorithms
> like quicksort and mergesort, I am getting 0 for the runtime. I need
> a function that I can call before the sort begins and after it ends
> that would return a time value, preferably to the nanosecond. Is
> there a library out there that can do this, or is the answer right on
> my machine and I just haven't found it yet?
The standard way of handling this is to use a large data set and
shuffle/re-sort it many (like, say, 10,000) times. In addition to making
the time more easily measurable, this also avoids the possibility of
your test returning incorrect results because of anomalies in the data's
original order. (e.g., if your shuffled data just happens to be in order
already, a bubblesort is likely to be faster than a quicksort.)

If you are required to use a small data set, just perform more
repetitions to get measurable data.

-- 
When we reduce our own liberties to stop terrorism, the terrorists
have already won. - reverius

Innocence is no protection when governments go bad. - Mr. Slippery