Re: Measuring performance of GHC
Seems like a good idea, for sure. I have not, but I might eventually.

On 4 Dec 2016 21:52, "Joachim Breitner" wrote:

> Hi,
>
> did you try to compile it with a profiled GHC and look at the report? I
> would not be surprised if it would point to some obvious sub-optimal
> algorithms in GHC.
>
> Greetings,
> Joachim
>
> [...]

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Re: Measuring performance of GHC
Hi,

did you try to compile it with a profiled GHC and look at the report? I would not be surprised if it would point to some obvious sub-optimal algorithms in GHC.

Greetings,
Joachim

On Sunday, 4 Dec 2016 at 20:04, David Turner wrote:

> [...]

--
Joachim “nomeata” Breitner
m...@joachim-breitner.de • https://www.joachim-breitner.de/
XMPP: nome...@joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F
Debian Developer: nome...@debian.org
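For anyone wanting to try this, a rough sketch of what "compile with a profiled GHC" can look like in practice. This assumes the make-based build system of this era and that `mk/build.mk.sample` offers a `prof` build flavour; paths, flavour names, and `Types.hs` (a stand-in for the problematic module) are assumptions worth double-checking against your checkout:

```shell
# In a GHC source tree: pick the profiling build flavour.
# (Assumption: mk/build.mk.sample carries a "prof" flavour.)
cp mk/build.mk.sample mk/build.mk
# Then edit mk/build.mk and set:
#   BuildFlavour = prof

./boot && ./configure && make -j

# Compile the problematic module with the profiled stage-2 compiler;
# +RTS -p asks the runtime for a cost-centre report (written to a .prof file).
inplace/bin/ghc-stage2 -O Types.hs +RTS -p -RTS
```

The resulting .prof file attributes time and allocation to cost centres inside GHC itself, which is what would expose any obviously sub-optimal algorithm.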
Re: Measuring performance of GHC
Nod nod.

amazonka-ec2 has a particularly painful module containing just a couple of hundred type definitions and their associated instances and such. None of the types is enormous. There's an issue open on GitHub [1] where I've guessed at some possible better ways of splitting the types up to make GHC's life easier, but it'd be great if it didn't need any such shenanigans. It's a bit of a pathological case: auto-generated 15 kLoC and lots of deriving, but I still feel it should be possible to compile with less than 2.8 GB RSS.

[1] https://github.com/brendanhay/amazonka/issues/304

Cheers,

David

On 4 Dec 2016 19:51, "Alan & Kim Zimmerman" wrote:

> [...]
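A rough sketch of how such a peak-RSS figure can be measured, assuming GNU time is available and with `Types.hs` standing in for the offending amazonka-ec2 module:

```shell
# GNU time's -v output includes "Maximum resident set size" (peak RSS, in kB)
# of the ghc process that compiled the module.
/usr/bin/time -v ghc -O Types.hs

# GHC's own runtime can also print a heap/GC summary for the compiler itself,
# including total bytes allocated and peak memory in use:
ghc -O Types.hs +RTS -s -RTS
```

(On macOS the equivalent is `/usr/bin/time -l`, which reports the figure in bytes.)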
Re: Measuring performance of GHC
I agree.

I find that compiling things with large data structures, such as working with the GHC AST via the GHC API, gets pretty slow.

To the point where I have had to explicitly disable optimisation on HaRe, otherwise the build takes too long.

Alan

On Sun, Dec 4, 2016 at 9:47 PM, Michal Terepeta wrote:

> [...]
Measuring performance of GHC
Hi everyone,

I've been running nofib a few times recently to see the effect of some changes on compile time (not the runtime of the compiled program). And I've started wondering how representative nofib is when it comes to measuring compile time and compiler allocations? It seems that most of the nofib programs compile really quickly...

Is there some collection of modules/libraries/applications that was put together with the purpose of benchmarking GHC itself and I just haven't seen/found it?

If not, maybe we should create something? IMHO it sounds reasonable to have separate benchmarks for:
- Performance of GHC itself.
- Performance of the code generated by GHC.

Thanks,
Michal
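For single modules, GHC can already report some of these numbers itself. A sketch, with `Foo.hs` as a hypothetical stand-in module; exact flag spellings are worth double-checking against your GHC version:

```shell
# -v2 makes GHC print per-phase timings and allocations;
# -fforce-recomp defeats the recompilation check so repeated runs are comparable.
ghc -O -fforce-recomp -v2 Foo.hs

# Machine-readable RTS statistics for the ghc process itself
# (total bytes allocated, GC times), handy for diffing two compiler builds:
ghc -O -fforce-recomp Foo.hs +RTS -t --machine-readable -RTS
```

A GHC-focused benchmark suite could essentially be a curated set of such modules plus a harness that collects these numbers.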