That makes sense to me, since very often many of the parts being processed 
are stand-alone and compile correctly (almost) by themselves (needing 
perhaps only a package clause to form a complete source file). These 
smaller pieces need one pass to grab their trees and symbols, and are then 
joined to the other parts. Go's compiler dodges a lot of this extra work by 
caching intermediate binary objects.

On Saturday, 2 March 2019 13:17:34 UTC+1, Jesper Louis Andersen wrote:
>
> On Thu, Feb 28, 2019 at 12:46 AM <ivan.m...@gmail.com> 
> wrote:
>
>> Thanks, Ian.
>>
> I remember reading in some compiler book that languages should be 
> designed for a single pass to reduce compilation time.
>>
>>
> As a guess: this was true in the past, but in a modern setting it fails to 
> hold.
>
> Andy Keep's PhD dissertation[0] implements a "nanopass compiler", which 
> takes the pass count to the extreme. Rather than having a single pass, the 
> compiler makes 50 passes or so over the code, each doing a little 
> simplification. The compelling reason to do so is that you can cut, copy, 
> and paste (snarf) each pass and tinker far more with the compilation 
> pipeline than you normally could. Also, rerunning certain simplification 
> passes along the way tends to improve the final emitted machine code. 
> You might wonder how much this affects compilation speed. Quote:
>
> "The new compiler meets the goals set out in the research plan. When 
> compared to the original compiler on a set of benchmarks, the benchmarks, 
> for the new compiler run, on average, between 15.0% and 26.6% faster, 
> depending on the architecture and optimization level. The compile times for 
> the new compiler are also well within the goal, with a range of 1.64 to 
> 1.75 times slower. "
>
> [Note: the goal was a factor 2.0 slowdown at most]
>
> The compiler it is beating here is Chez Scheme, a highly optimizing Scheme 
> compiler.
>
> Part of the reason is that intermediate representations can be kept 
> entirely in memory nowadays, where they are much faster to process, and 
> memory is still getting faster, even if at a slower pace than CPUs are. 
> The nanopass framework is also unique in that it has macro tooling for 
> creating intermediate languages out of existing ones, so the compiler 
> contains many IR formats as well.
>
> In conclusion: if a massive pass blowup can be implemented within a 2x 
> slowdown, then a couple of additional passes are unlikely to make a 
> compiler run noticeably slower.
>
> [0] http://andykeep.com/pubs/dissertation.pdf
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
