On 05/02/13 23:48, Johan Tibell wrote:
I've now added the shootout programs that could be added without
modifying the programs themselves. I described why some programs weren't
added in nofib/shootout/README.

For the curious, here's the change in these benchmarks from 7.0.4 to 7.6.2:

--------------------------------------------------------------------------------
         Program           Size    Allocs   Runtime   Elapsed  TotalMem
--------------------------------------------------------------------------------
    binary-trees          +2.6%     -0.6%     -2.8%     -2.8%    -22.3%
  fannkuch-redux          +1.4%+11514445.     +0.2%     +0.2%     +0.0%
          n-body          +3.8%     +0.0%     +4.4%     +4.4%     +0.0%
        pidigits          +2.2%     -6.9%     -1.7%     -1.2%    -20.0%
   spectral-norm          +2.1%    -61.3%    -54.8%    -54.8%     +0.0%
--------------------------------------------------------------------------------
             Min          +1.4%    -61.3%    -54.8%    -54.8%    -22.3%
             Max          +3.8%+11514445.     +4.4%     +4.4%     +0.0%
  Geometric Mean          +2.4%   +737.6%    -14.7%    -14.6%     -9.1%

This is slightly off topic, but I wanted to plant this thought in people's brains: we shouldn't attach much significance to the average of a bunch of benchmarks (even the geometric mean), because doing so assumes that the benchmarks have a sensible distribution, and we have no reason to expect that to be the case. For example, in the results above, we wouldn't expect a typical program to see a 14.7% reduction in runtime.
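
To make that concrete, here's a quick sketch (just the runtime deltas from the table above, i.e. -2.8%, +0.2%, +4.4%, -1.7% and -54.8% expressed as new/old ratios) showing how the single spectral-norm result dominates the geometric mean:

    -- Runtime ratios (new/old) taken from the table above.
    ratios :: [Double]
    ratios = [0.972, 1.002, 1.044, 0.983, 0.452]  -- 0.452 is spectral-norm

    geomMean :: [Double] -> Double
    geomMean xs = product xs ** (1 / fromIntegral (length xs))

    main :: IO ()
    main = print (geomMean ratios)  -- ~0.853, i.e. the -14.7% above

Drop the 0.452 entry and the geometric mean comes back to roughly 1.00, so the headline -14.7% is really a statement about one benchmark.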

Using the median might be slightly more useful (here it would be something around 0% for runtime), though it's still technically dodgy. When I get around to it I'll modify nofib-analyse to report medians instead of geometric means.
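
For comparison, here's the same calculation with a median (again just a sketch over the ratios from the table), which lands near 0% rather than -14.7%:

    import Data.List (sort)

    -- Median of a non-empty list; averages the two middle elements
    -- when the list has even length.
    median :: [Double] -> Double
    median xs
      | odd n     = sorted !! mid
      | otherwise = (sorted !! (mid - 1) + sorted !! mid) / 2
      where
        sorted = sort xs
        n      = length xs
        mid    = n `div` 2

    main :: IO ()
    main = print (median [0.972, 1.002, 1.044, 0.983, 0.452])  -- 0.983, i.e. about -1.7%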

Cheers,
        Simon

