I agree. I find that compilation of anything involving large data structures, such as working with the GHC AST via the GHC API, gets pretty slow.
To the point where I have had to explicitly disable optimisation on HaRe, otherwise the build takes too long.

Alan

On Sun, Dec 4, 2016 at 9:47 PM, Michal Terepeta <michal.terep...@gmail.com> wrote:
> Hi everyone,
>
> I've been running nofib a few times recently to see the effect of some
> changes on compile time (not the runtime of the compiled program). And
> I've started wondering how representative nofib is when it comes to
> measuring compile time and compiler allocations? It seems that most of
> the nofib programs compile really quickly...
>
> Is there some collection of modules/libraries/applications that was put
> together with the purpose of benchmarking GHC itself and I just haven't
> seen/found it?
>
> If not, maybe we should create something? IMHO it sounds reasonable to
> have separate benchmarks for:
> - Performance of GHC itself.
> - Performance of the code generated by GHC.
>
> Thanks,
> Michal
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
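[For readers of the archive: disabling optimisation for a single slow-to-compile package, as described above, can be done in a `cabal.project` file. A minimal sketch — the package name `HaRe` is taken from the message, and the exact stanza you want may differ depending on your Cabal version:]

```
-- cabal.project: turn off optimisation only for the slow package,
-- leaving the rest of the build plan at the default level.
package HaRe
  optimization: False
  -- roughly equivalent to passing -O0 via:
  -- ghc-options: -O0
```

[GHC itself can also report its own allocations while compiling a module, e.g. `ghc -O2 -fforce-recomp Foo.hs +RTS -s`, which is one way to get the "compiler allocations" numbers discussed in this thread.]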