OK, they just run for a few seconds, so any suggestions are welcome
(in fact needed)

/dev/bintime

Wouldn't many rounds of running with different data sets (to minimize cache hits), timed with time(1), serve the same purpose? Or does time(1) introduce some unavoidable minimum margin of error on short runs that gets multiplied by the number of runs and drowns out the actual measurement? If time(1) only introduces random fluctuations, they will cancel each other out over many runs, but if there is an unavoidable minimum error, multiple runs won't help.
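
For concreteness, here is a minimal sketch of what bracketing each run with /dev/bintime could look like in Plan 9 C. It uses nsec(2), which returns nanoseconds since the epoch (backed by /dev/bintime, as far as I know), so every run gets its own high-resolution figure instead of one coarse total from time(1). workload() is just a hypothetical stand-in for the code under test:

	#include <u.h>
	#include <libc.h>

	/* stand-in for the real code under test */
	void
	workload(void)
	{
		sleep(100);	/* 100 ms, just so the sketch does something */
	}

	void
	main(void)
	{
		vlong t0, t1;
		int i;

		/* several runs: zero-mean jitter averages out across them,
		 * but a fixed per-run bias would not */
		for(i = 0; i < 10; i++){
			t0 = nsec();
			workload();
			t1 = nsec();
			print("run %d: %lld ns\n", i, t1 - t0);
		}
		exits(nil);
	}

With per-run numbers like these you can see the spread directly: random fluctuations shrink as you average more runs, while a constant offset (measurement granularity, or whatever per-invocation overhead time(1) folds into its figures) stays the same no matter how many runs you add.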

--On Wednesday, April 15, 2009 8:22 AM -0700 ron minnich <rminn...@gmail.com> wrote:

On Wed, Apr 15, 2009 at 8:18 AM, hugo rivera <uai...@gmail.com> wrote:
seems reasonable to me, I assume you are looking at data consumption
only?

well, I am not really sure what you mean. Data consumption? ;-)

sorry: memory data footprint, not code + data. This all depends on
lots of factors, but for code remember that text is shared.


OK, they just run for a few seconds, so any suggestions are welcome
(in fact needed)

/dev/bintime

ron
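
(Regarding ron's point above that text is shared: on Plan 9 you can see this per process through proc(3). If memory serves, the segment file lists each segment's type, address range, and reference count, e.g.

	% cat /proc/123/segment

with 123 standing in for the pid of interest. The Text segment is shared by every process running the same binary, so the per-process cost is essentially Data + Bss + Stack.)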

