Simon Marlow wrote:
>> Nobench does already collect code size, but does not yet display it in
>> the results table. I specifically want to collect compile time as well.
>> I'm not sure what the best way to measure allocation and peak memory
>> use is?
> With GHC you need to use "+RTS -s" and then slurp in the <prog>.stat
> file. You can also get allocations, peak memory use, and separate
> mutator/GC times this way.
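
As an illustration of the slurping step, here is a minimal Haskell sketch,
assuming the stats file contains lines such as "... bytes allocated in the
heap" and "... bytes maximum residency"; the exact wording varies between
GHC versions, so treat the phrases as assumptions rather than a spec:

    -- Rough sketch of pulling a couple of numbers out of a <prog>.stat
    -- file written by "+RTS -s".  The line wording is assumed, not fixed.
    module Main where

    import Data.Char (isDigit, isLetter)
    import Data.List (isInfixOf)
    import System.Environment (getArgs)

    -- The figure at the start of a stats line, ignoring the thousands
    -- separators the RTS inserts ("5,160,801,488 bytes allocated ...").
    leadingNumber :: String -> Maybe Integer
    leadingNumber s =
      case filter isDigit (takeWhile (not . isLetter) s) of
        []     -> Nothing
        digits -> Just (read digits)

    -- Find the first line mentioning the given phrase and read its number.
    statField :: String -> [String] -> Maybe Integer
    statField phrase ls =
      case filter (phrase `isInfixOf`) ls of
        (l:_) -> leadingNumber l
        []    -> Nothing

    main :: IO ()
    main = do
      [statFile] <- getArgs      -- path to the <prog>.stat file
      stats <- fmap lines (readFile statFile)
      print ( statField "bytes allocated in the heap" stats
            , statField "bytes maximum residency"     stats )

Mutator and GC times could be pulled out of the same file in a similar way,
just matching on different lines.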
Oh, and one more thing. We have this program called nofib-analyse in GHC's
source tree:
http://darcs.haskell.org/ghc/utils/nofib-analyse
which takes the output from a couple of nofib runs and generates nice
tables in ASCII or LaTeX (for inclusion in papers; see e.g. our
pointer-tagging paper from ICFP'07). The only reason we haven't switched
to using nobench for GHC is the existence of this tool. Unfortunately it
relies on specifics of the output generated by a nofib run, uses a Perl
script, and so on. The point is, it needs some non-trivial porting.
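
To give a feel for the comparison-table idea, here is a toy sketch; it is
not nofib-analyse's actual format, which is far richer, and the timings
below are invented (the program names are just familiar nofib ones):

    -- Toy sketch: given runtimes from two runs, print one row per
    -- program with the relative change.  The real tool parses nofib's
    -- own log format and handles many more metrics.
    module Main where

    import Text.Printf (printf)

    type Results = [(String, Double)]   -- (program name, runtime in seconds)

    baseline, candidate :: Results
    baseline  = [("anna", 1.32), ("circsim", 2.10)]   -- invented figures
    candidate = [("anna", 1.21), ("circsim", 2.15)]

    -- One table row: old time, new time, and the percentage change.
    row :: String -> Double -> Double -> IO ()
    row prog t1 t2 =
      printf "%-12s %8.2f %8.2f %+7.1f%%\n" prog t1 t2 (100 * (t2 - t1) / t1)

    compareRuns :: Results -> Results -> IO ()
    compareRuns old new = do
      printf "%-12s %8s %8s %8s\n" "Program" "Old(s)" "New(s)" "Change"
      sequence_ [ row prog t1 t2
                | (prog, t1) <- old, Just t2 <- [lookup prog new] ]

    main :: IO ()
    main = compareRuns baseline candidate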
I'm pointing this out just in case you or anyone else feels enthusiastic
enough to port it to nobench, and hopefully to head off any duplication
of effort. Failing that, I'll probably get around to porting it myself at
some point.
Cheers,
Simon