On Wednesday, 17 September 2014 at 14:59:48 UTC, Andrei Alexandrescu wrote:
Awesome. Suggestion in order to leverage crowdsourcing: first focus on setting up the test bed such that adding benchmarks is easy. Then you and others can add a bunch of benchmarks.

On a somewhat related note, I've been working on a CI system to keep tabs on the compile-time/run-time performance, memory usage and file size for our compilers. It's strictly geared towards executing the same test case on different compiler configurations, though, so it doesn't really overlap with what is proposed here.
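To give a rough idea of the kind of measurement involved, here is a minimal sketch (not the actual CI code) that compiles a single benchmark with each compiler and records compile time and binary size. The compiler names, flags, and the `bench.d` source file are assumptions for illustration only:

```d
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.file : getSize;
import std.process : execute;
import std.stdio : writefln;

void main()
{
    // Hypothetical benchmark source and compiler list.
    string source = "bench.d";
    string[] compilers = ["dmd", "gdc", "ldc2"];

    foreach (compiler; compilers)
    {
        string output = "bench_" ~ compiler;
        // gdc takes GCC-style -o; dmd/ldc2 take -of=.
        auto args = compiler == "gdc"
            ? [compiler, source, "-o", output]
            : [compiler, source, "-of=" ~ output];

        auto sw = StopWatch(AutoStart.yes);
        auto result = execute(args);
        sw.stop();

        if (result.status != 0)
        {
            writefln("%s failed:\n%s", compiler, result.output);
            continue;
        }
        writefln("%s: %s ms to compile, %s bytes binary",
                 compiler, sw.peek.total!"msecs", getSize(output));
    }
}
```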

Right now, it's continually building DMD/GDC/LDC from Git and measuring some 40 mostly small benchmarks, but I need to improve the web UI a lot before it is ready for public consumption. Just thought I would mention it here to avoid scope creep in what Peter Alexander (and others) might be working on.

Cheers,
David
