On Monday, January 6, 2003, at 07:54 AM, Glasgow, David wrote:

It may be that things have moved along a bit, or I am getting something wrong... I'd be interested in any comments.
Computers and timers are a lot more accurate.

It is true that interrupts and system threads are running all the time, so the time you measure will often be higher than the time you actually want to look at (in some sense).

If you are timing a one-second operation you _will_ be including some system overhead in the measurement, so precision timing may not mean much here. If you take lots of measurements and average them, you will have the effective time in that environment, hardware, and OS.
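
Roughly, something along these lines (untested; "the milliseconds" is just one choice of timer, and longOperation is a made-up stand-in for whatever you are measuring):

   -- Sketch of the averaging approach, overhead included in the result.
   function averageTrialTime pTrials
      put 0 into tTotal
      repeat pTrials times
         put the milliseconds into tStart
         longOperation  -- placeholder for the operation being measured
         add the milliseconds - tStart to tTotal
      end repeat
      return tTotal / pTrials  -- effective time in ms for this machine and OS
   end averageTrialTime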

If you are measuring very short times, you need the precision. In this case you can average, making sure you throw out the outliers, or you can take the minimum of several time trials. The minimum tends to remove the effect of the OS, so you cannot compare it to the times from the longer method above, but it does let you make some decisions about coding style.
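
A rough sketch of the minimum-of-trials idea (again untested; shortOperation is only a placeholder):

   -- Take the smallest elapsed time over several trials, which tends to
   -- strip out the runs that were inflated by the OS.
   function minTrialTime pTrials
      put empty into tMin
      repeat pTrials times
         put the milliseconds into tStart
         shortOperation  -- placeholder for the operation being measured
         put the milliseconds - tStart into tElapsed
         if tMin is empty or tElapsed < tMin then put tElapsed into tMin
      end repeat
      return tMin
   end minTrialTime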

The minimum method may provide a useful measure for long periods also, depending on your use, but it will always include some system overhead time.

In my method for measuring short times, I use the same command to get the time before and after the operation being measured. I also use that same command to time doing nothing at all, which effectively measures the time it takes to make a measurement, and I subtract that from my results.
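
Something like this (untested, and the names are only placeholders; the timer shown here is the milliseconds, which is just one possibility):

   on timeWithOverheadRemoved
      -- time doing nothing, i.e. the cost of taking a measurement
      -- (on a fast machine this may well come out as 0 ms)
      put the milliseconds into tStart
      put the milliseconds - tStart into tOverhead
      -- now time the real operation and subtract that overhead
      put the milliseconds into tStart
      shortOperation  -- placeholder for the operation being measured
      put the milliseconds - tStart - tOverhead into tResult
      answer tResult & " ms"
   end timeWithOverheadRemoved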

Dar Scott




