Guy Hulbert <gwhulb...@eol.ca> writes:
> I am interested first in developing a generic framework around the
> work already done for 'the benchmark game' (TBG*).  I will pretend
> that I am starting from scratch and define a protocol for adding
> algorithms and exchanging information.
>
> I have been convinced that everything following has been done for
> TBG but some of it is obscure.  The details are hidden in CVS.
>
> I'd like to set things up so everything is fully automated.  Perl6
> developers (and users :-) should be able to just run the benchmarks
> in a "reasonable way" (one which halts :-) after installing the
> latest rakudo release.

Just a hint from my Perl5 benchmarking effort: I had to take quite
some care to make the benchmark numbers stable. Every newly added
library changed the numbers, which made it difficult to keep them
comparable over the lifetime of the benchmark suite itself.
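
To make the isolation idea concrete, here is a minimal sketch in
plain Perl5 (the plugin names and the main() convention are made up
for illustration; this is not Benchmark::Perl::Formance's actual
API): each benchmark runs in its own forked child, so the libraries
one plugin loads cannot perturb the memory layout and timings of the
others.

  # Sketch: run each benchmark plugin in a forked child and report
  # its wallclock time back to the parent through a pipe.
  use strict;
  use warnings;
  use Time::HiRes qw(gettimeofday tv_interval);

  my @plugins = qw(My::Bench::Fib My::Bench::Regex);  # hypothetical plugins

  for my $plugin (@plugins) {
      pipe(my $reader, my $writer) or die "pipe: $!";
      my $pid = fork() // die "fork: $!";
      if ($pid == 0) {                  # child: load and time in isolation
          close $reader;
          eval "require $plugin; 1" or die $@;
          my $t0 = [gettimeofday];
          $plugin->main;                # assumed plugin entry point
          print {$writer} tv_interval($t0), "\n";
          exit 0;
      }
      close $writer;                    # parent: collect the elapsed time
      chomp(my $elapsed = <$reader>);
      waitpid $pid, 0;
      printf "%-20s %.6f s\n", $plugin, $elapsed;
  }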

Feel free to look into

  http://cpansearch.perl.org/src/SCHWIGON/Benchmark-Perl-Formance-0.22/lib/Benchmark/Perl/Formance.pm

and

  http://cpansearch.perl.org/src/SCHWIGON/Benchmark-Perl-Formance-0.22/ChangeLog

especially around v0.20, to get an idea of why, how, and when I
forked plugins off into separate processes, which settings I apply to
make a typical Linux system stable, and so on.
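
For flavour, a minimal sketch (again plain Perl5, run as root) of two
commonly used knobs of that kind; which settings actually matter
varies by kernel and hardware, and this is an illustration rather
than my exact recipe:

  # Sketch: pin down two common sources of run-to-run noise on Linux.
  use strict;
  use warnings;

  sub write_file {
      my ($path, $value) = @_;
      open my $fh, '>', $path
          or do { warn "cannot write $path: $!"; return };
      print {$fh} $value;
      close $fh;
  }

  # Pin the CPU frequency governor to "performance" so the clock does
  # not scale up and down between (or during) benchmark runs.
  write_file($_, "performance\n")
      for glob "/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor";

  # Disable address space layout randomization so the process memory
  # layout is repeatable from run to run.
  write_file("/proc/sys/kernel/randomize_va_space", "0\n");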

It's all Perl5, but the ideas I collected came from more general
sources.

If, on the other hand, you already have more experience with this
topic, I would love to hear your comments and ideas so I can apply
them to my Perl5 benchmarking.

Kind regards,
Steffen
-- 
Steffen Schwigon <s...@renormalist.net>
Dresden Perl Mongers <http://dresden-pm.org/>
