On Thu, Dec 04, 2003 at 10:33:14PM +0000, Fergal Daly wrote:
> > Another useful sort of test would be "make sure this function runs in less
> > than N perlmips time" where a perlmip is some unit of CPU time calibrated
> > relative to the current hardware.  So a pmip on machine A would be
> > roughly twice as long as a pmip on a machine that's twice as fast.
> > This enables us to test "make sure this isn't too slow".
> 
> Not so, yeah - just like the mip, the pmip would be a bit too elusive and
> ever-changing for this to work quite as well as we'd like.

I dunno about that.  It doesn't have to be terribly accurate.  It's useful
for situations where someone finds a condition that makes your code run
reeeeeeeeeeeeeeeeeeally slow.

Calibration would be pretty straightforward.  Just have a handful of
loops to run known snippets of Perl and calibrate the pmip based on how long
they take to run.  This would change from Perl version to Perl version
and platform to platform, but you can find something that's not off by more 
than 25%.
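
Something like this would do for a first cut.  It's only a sketch - the
names (calibrate_pmips, runs_within_pmips), the reference snippet and the
one second window are all invented - but countit(), timeit() and the
cpu_a()/iters() accessors are stock Benchmark.pm:

    use strict;
    use warnings;
    use Benchmark qw(timeit countit);

    # Calibration: count how many iterations of a fixed, known snippet
    # this machine manages in one CPU second.  That count is the
    # per-machine scale factor ("pmips per CPU second").
    sub calibrate_pmips {
        my $t = countit( 1, sub { my $x = 0; $x += $_ for 1 .. 100 } );
        return $t->iters;
    }

    # Coarse check: does running $code once stay within $max_pmips?
    sub runs_within_pmips {
        my ( $code, $max_pmips ) = @_;
        my $pmips_per_sec = calibrate_pmips();
        my $cpu = timeit( 1, $code )->cpu_a;   # user + system CPU seconds
        return $cpu * $pmips_per_sec <= $max_pmips;
    }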


> Anyway to do these you can do
> my $res = timethese(10000, {a => $a_code, b => $b_code}, "none");
> 
> which will produce no output and $res will contain all the benchmark 
> information and you can then perform whatever tests you like on it.

Trick is, these are for tests where you're not comparing two pieces of
code.  You want to make sure a single piece of code isn't taking too long.
Again, it's a rather coarse-grained test.
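
For what it's worth, a sketch of that kind of single-piece test using the
$res hash above (frobnicate() and the 2 CPU second ceiling are made-up
examples, and the threshold is still hardware-dependent, which is exactly
where the pmip idea would come in):

    use strict;
    use warnings;
    use Test::More tests => 1;
    use Benchmark qw(timethese);

    # Run the code silently through timethese(), then assert on the
    # CPU time it used rather than comparing it against anything else.
    my $res = timethese( 10_000, { frobnicate => \&frobnicate }, "none" );
    cmp_ok( $res->{frobnicate}->cpu_a, '<=', 2,
        "10,000 frobnicate() calls stay under 2 CPU seconds" );

    sub frobnicate { my $x = 0; $x += $_ for 1 .. 50; return $x }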


> is_n_times_faster(10000, 5, $XS_code, $perl_code, "XS is 5 times faster")
> 
> But I don't think that was what Jim wanted; it seemed like he was trying to
> display benchmark info purely for informational purposes,

For that I'd recommend doing what DBI does and putting the benchmarking
code into test.pl.
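
Not DBI's actual code, but the general shape is a plain script that prints
the numbers for whoever runs "make test" and never turns them into
pass/fail (pure_perl and tweaked are just placeholder labels):

    #!/usr/bin/perl -w
    # Informational-only benchmarking for a test.pl: nothing here is an
    # assertion, the numbers are printed for a human to eyeball.
    use strict;
    use Benchmark qw(cmpthese);

    print "Benchmark results (informational only):\n";
    cmpthese( 10_000, {
        pure_perl => sub { my $x = 0; $x += $_ for 1 .. 50 },
        tweaked   => sub { my $x = 0; $x += $_ for reverse 1 .. 50 },
    } );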


-- 
Michael G Schwern        [EMAIL PROTECTED]  http://www.pobox.com/~schwern/
Sometimes you eat the path, sometimes the path eats you.
