Great, thank you! It's good to have some more benchmarks.

Simon, will you apply?

Simon

From: cvs-ghc-boun...@haskell.org [mailto:cvs-ghc-boun...@haskell.org] On 
Behalf Of David Peixotto
Sent: 18 November 2010 23:12
To: cvs-ghc@haskell.org
Subject: Adding the fibon benchmarks to nofib

I am pleased to announce that I have finished porting the [fibon
benchmarks][1] into nofib. I added the benchmarks to a new subdirectory in the
nofib repository. The benchmarks are available as a single patch from this
temporary darcs repository:

    http://www.cs.rice.edu/~dmp4866/darcs/nofib

This repository should differ from the nofib head only by this one patch. I
ran the benchmarks against a recent GHC HEAD on a 32-bit Mac and a 64-bit
Linux box. All the benchmarks ran successfully for me.

I did not modify any files in the nofib repository other than adding the fibon
subdirectory. The benchmarks can be run using the standard make rules:

    $ make NoFibSubDirs=fibon boot && make NoFibSubDirs=fibon
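
Since `nofib-analyse` works from run logs, a typical before/after comparison
might look like the following. This is only a sketch assuming the usual
nofib workflow; the log file names are arbitrary, not part of the suite:

```shell
# Build and run the fibon subset, capturing the run log
# (log file names here are arbitrary).
make NoFibSubDirs=fibon boot
make NoFibSubDirs=fibon 2>&1 | tee fibon-before.log

# ...rebuild GHC or change compiler flags, then run again...
make NoFibSubDirs=fibon 2>&1 | tee fibon-after.log

# Compare the two runs.
nofib-analyse fibon-before.log fibon-after.log
```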

Some of the fibon benchmarks take a while to run, so I wasn't sure whether
people would want them enabled by default. A full compile and run of the
benchmarks (one iteration and two "ways" per benchmark) took 27 minutes on
the Mac and 43 minutes on the Linux system.

Please let me know if you have any comments or issues with this patch.

Benchmark Contents
==================

There are a total of 34 benchmarks divided into four subgroups. The DPH, Repa,
and Shootout benchmarks are available elsewhere, but I went ahead and added
them since they are part of the fibon suite, and it may be useful to have a
version of them readily available to GHC developers.

As far as I know, the Hackage benchmarks are not available (as benchmarks)
anywhere else.

  * Dph (4)

      Programs taken from the DPH library benchmarks.

        Benchmarks:
          Dotp Qsort QuickHull Sumsq

  * Hackage (18)

      Benchmarks include programs and libraries that have been uploaded to
      Hackage. Any package dependencies that are not included as boot
      packages have been included in the benchmark as source files.

        Benchmarks:
          Agum Bzlib Cpsa Crypto Fgl Fst Funsat Gf HaLeX
          Happy Hgalib Palindromes Pappy QuickCheck Regex Simgi
          TernaryTrees Xsact

  * Repa (5)

      Programs are taken from the examples in the Repa library. A copy of
      the Repa library is included in the _RepaLib directory. It is simply
      a clone of the Repa darcs repository.

        Benchmarks:
          Blur FFT2d FFT3d Laplace MMult

  * Shootout (7)

      Programs written for the [Computer Language Benchmarks Game][2].

        Benchmarks:
          BinaryTrees ChameneosRedux Fannkuch Mandelbrot Nbody
          Pidigits SpectralNorm

Open Issues
===========

  1. HSC2HS_INPLACE does not get set correctly for nofib Makefiles

    Two of the fibon benchmarks have .hsc files that need to be processed by
    `hsc2hs`. The HSC2HS_INPLACE variable does not get set correctly inside
    the makefiles, so the processing fails unless I add this to the Makefile:

        HSC2HS_INPLACE=$(GHC_TOP)/$(INPLACE_BIN)/$(GHC_HSC2HS_PGM)

  2. Cannot check validity of non-`stdout` outputs

    Several benchmarks write their output to named files instead of stdout.
    As far as I can tell, the `runstdtest` script cannot check an output
    file for validity. The program exit code is still checked, so major
    faults will be detected.

  3. `nofib-analyse` can only read binary size results

    While this is not specific to the fibon benchmarks, I thought I would
    mention it here. I am not able to get `nofib-analyse` to read any data
    from a nofib run log except the program binary size. This happens for
    the fibon benchmarks as well as the existing ones.
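
Regarding issue 2: until `runstdtest` can check named output files, a
wrapper script could diff them against saved known-good copies after a run.
A minimal sketch (the file names here are hypothetical, not part of the
suite; for the sketch, both files are fabricated in place):

```shell
#!/bin/sh
# Sketch of a manual output check: compare a benchmark's named
# output file against a saved expected copy, failing on mismatch.
set -e

# Fabricate a benchmark output and its expected copy for this sketch;
# in a real run, bench.out would be written by the benchmark itself.
printf 'checksum: 42\n' > bench.out
printf 'checksum: 42\n' > bench.expected

if diff -u bench.expected bench.out; then
    echo "bench.out: OK"
else
    echo "bench.out: MISMATCH" >&2
    exit 1
fi
```

In practice the expected file would be kept in the benchmark's directory
alongside its other golden outputs.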

Comparison with the Fibon Benchmark Tools
=========================================

The [fibon package][3] includes tools for running and analyzing benchmarks.
There was a brief discussion off-list about moving nofib to use the fibon
tools. I thought that it would be quicker to port the fibon benchmarks into
the nofib suite (there are fewer fibon benchmarks and I could automate part of
the process). Also, people are already comfortable using the nofib tools.

My understanding is that the nofib infrastructure is considered somewhat
"legacy". If there is a desire to move to something new, I think the fibon
infrastructure could be a viable alternative. The data collected by fibon
and the analysis it performs are very similar to those of `nofib-analyse`.

Some major differences include:

  * Sandbox vs. in-place run directory

      Nofib builds and runs each benchmark in place, while fibon uses a new
      sandbox directory for each build/run. With an in-place build you have
      to clean between runs with different settings. With the sandbox you
      must rebuild the code each time. Rebuilding each time can be
      time-consuming if your settings do not change (this particularly
      shows up with the Repa benchmarks, which each rebuild parts of the
      Repa library).

  * Make-based vs. Cabal-based build

      Nofib uses make to handle all the builds while fibon uses cabal.

  * Top-level vs. in-place configuration

      Nofib configuration is done by changing settings in the makefiles.
      Fibon uses a top-level config file that allows default settings to be
      easily overridden for specific benchmarks. It is easy to have
      multiple standard configurations available to choose from at run time.

  * GHC integration

      Nofib is very well integrated with the GHC build system. It picks up
      settings from the standard make files and can reuse the standard GHC
      targets.

[1]: https://github.com/dmpots/fibon-benchmarks
[2]: http://shootout.alioth.debian.org
[3]: https://github.com/dmpots/fibon

_______________________________________________
Cvs-ghc mailing list
Cvs-ghc@haskell.org
http://www.haskell.org/mailman/listinfo/cvs-ghc
