You can also use the fibon benchmark suite to measure compilation time 
(https://github.com/dmpots/fibon). Unlike nofib, it only collects per-benchmark 
compile times, so it would probably be useful as an overall performance 
measurement but not as a way to pinpoint inefficiencies.

Since you are only interested in compile time, you can use the `Test` benchmark 
size with one iteration, which should run quite quickly (e.g. `fibon-run 
--iters=1 --size=Test`).
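
For example, something along these lines should do it. This is only a sketch: 
it assumes fibon-run picks up whichever ghc is first on your PATH (I haven't 
checked whether there is a flag for pointing it at a particular compiler), and 
the ghc-ncg / ghc-llvm-* paths are just placeholders for your three builds.

  # one quick iteration per compiler under test; compare the
  # per-benchmark compile times collected by each run
  PATH=/path/to/ghc-ncg/bin:$PATH          fibon-run --iters=1 --size=Test
  PATH=/path/to/ghc-llvm-before/bin:$PATH  fibon-run --iters=1 --size=Test
  PATH=/path/to/ghc-llvm-after/bin:$PATH   fibon-run --iters=1 --size=Test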

On Oct 5, 2011, at 7:14 AM, Simon Marlow wrote:

> On 04/10/2011 18:04, David Terei wrote:
>> On 4 October 2011 06:09, Erik de Castro Lopo <[email protected]> wrote:
>>> Ah, that's an idea. I should try to force the stage1 and stage2 compilers
>>> to go via LLVM and then see what the compile times are:
>>> 
>>>  - via NCG
>>>  - via LLVM before my changes
>>>  - via LLVM after my changes
>>> 
>> 
>> Yes, that will work and be a large-scale test. Easier, though, is to just
>> use the nofib benchmark suite 'included' with ghc.
> 
> Right, nofib will measure compile times and the nofib-analyse program can 
> compare the logs from two nofib runs and give you a summary, which includes 
> differences in compile times (both per-module and an average).
> 
> Cheers,
>       Simon
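
For what it's worth, the nofib route Simon describes would look roughly like 
the following. Treat it as a sketch from memory: the EXTRA_HC_OPTS variable and 
the location of the nofib-analyse binary may differ in your tree, so check the 
instructions that ship with nofib.

  # from the root of a GHC tree, using the just-built compiler
  cd nofib
  make boot
  make 2>&1 | tee log-ncg                        # baseline via the NCG
  make clean && make boot
  make EXTRA_HC_OPTS=-fllvm 2>&1 | tee log-llvm  # same benchmarks via LLVM
  nofib-analyse/nofib-analyse log-ncg log-llvm   # summary, incl. compile time differences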


_______________________________________________
Cvs-ghc mailing list
[email protected]
http://www.haskell.org/mailman/listinfo/cvs-ghc
