Hi,
I see the following challenges here, which have partially been touched
on by the discussion in the mentioned proposal.
- The tests we are looking at might be quite time intensive (lots of
modules that take substantial time to compile). Is this practical to
run when people locally execute
Michal Terepeta writes:
> Interesting! I must have missed this proposal. It seems that it didn't meet
> with much enthusiasm though (but it also proposes to have a completely
> separate repo on GitHub).
>
> Personally, I'd be happy with something more modest:
> - A
Michal Terepeta writes:
> Hi everyone,
>
> I've been running nofib a few times recently to see the effect of some
> changes on compile time (not the runtime of the compiled program). And I've
> started wondering how representative nofib is when it comes to measuring
I made #12930 to track this.
Matt
On Fri, Dec 2, 2016 at 11:22 PM, Joachim Breitner
wrote:
> Hi,
>
> again, Travis has been failing to build master for a while. Unfortunately,
> only the authors of commits get mailed by Travis, so I did not notice it
> until now. But usually,
On Mon, Dec 5, 2016 at 12:00 PM Moritz Angermann
wrote:
> Hi,
>
> I started the GHC Performance Regression Collection Proposal[1]
> (rendered: [2]) a while ago with the idea of having a trivially
> community-curated set of small[3] real-world examples with
Hi,
I started the GHC Performance Regression Collection Proposal[1] (rendered: [2])
a while ago with the idea of having a trivially community-curated set of
small[3] real-world examples with performance regressions. I might be at fault
here for not describing this to the best of my abilities.
If not, maybe we should create something? IMHO it sounds reasonable to have
separate benchmarks for:
- Performance of GHC itself.
- Performance of the code generated by GHC.
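To make the distinction between those two kinds of benchmarks concrete, here is a rough command-line sketch (the module name `Example.hs` is purely illustrative and not part of any proposal; the flags shown are standard GHC options):

```shell
# Sketch only: Example.hs is a hypothetical module used for illustration.

# 1. Performance of GHC itself: pass RTS options to the compiler's own
#    runtime so it reports allocation and timing statistics for the
#    compilation (-fforce-recomp defeats the recompilation check so GHC
#    actually does the work every run).
ghc -O2 -fforce-recomp -rtsopts Example.hs +RTS -s

# 2. Performance of the code generated by GHC: run the compiled program
#    with its own RTS statistics (possible because the program was built
#    with -rtsopts above).
./Example +RTS -s
```

The two numbers can move independently, which is why a single benchmark suite like nofib, run one way, does not capture both.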
I think that would be great, Michal. We have a small and unrepresentative
sample in testsuite/tests/perf/compiler
Simon