On 2019-02-08 23:01, George Neuner wrote:
On Fri, 8 Feb 2019 08:37:33 -0500, Matthias Felleisen
<matth...@felleisen.org> wrote:


On Feb 6, 2019, at 3:19 PM, George Neuner <gneun...@comcast.net> wrote:


The idea that a compiler should be structured as multiple passes, each
doing just one clearly defined thing, is quite old.  I don't have
references, but I recall some of these ideas being floated in the late
80's, early 90's [when I was in school].

Interestingly, LLVM began (circa 2000) with similar notions that the
compiler should be highly modular and composed of many (relatively
simple) passes.  Unfortunately, they quickly discovered that, for a C
compiler at least, having too many passes makes the compiler very slow
- even on fast machines.  Before long they started combining
the simple passes to reduce the running time.


I strongly recommend that you read the article(s) to find out how
different nanopasses are from the multiple-pass compilers, which
probably date back to the late 60s at least. — Matthias

I did read the article, and it seems to me that the "new idea" is the
declarative tool-generator framework rather than the so-called
"nanopass" approach.

The distinguishing characteristics of "nanopass" are said to be:

 (1) the intermediate-language grammars are formally specified and
     enforced;
 (2) each pass needs to contain traversal code only for forms that
     undergo meaningful transformation; and
 (3) the intermediate code is represented more efficiently as records.
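
For concreteness, here is roughly what (1) looks like in the Racket
nanopass package - a toy grammar of my own, not one from the article:

  #lang racket
  (require nanopass/base)

  ;; A formally specified *and enforced* IR grammar: the framework
  ;; rejects any term built in a pass that does not conform to these
  ;; productions.
  (define-language L0
    (entry Expr)
    (terminals
     (symbol (x))   ; checked with symbol?
     (number (n)))  ; checked with number?
    (Expr (e)
      x
      n
      (if e0 e1 e2)
      (+ e0 e1)))

  (define-parser parse-L0 L0)  ; S-expressions -> L0 record terms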


IRs implemented using records/structs go back to the 1960s (if not
earlier).
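
For anyone unfamiliar with the jargon: "records" here just means
struct nodes instead of raw S-expressions, e.g. in Racket (names
mine):

  ;; Field access is a constant-offset load, versus walking a list
  ;; like '(+ 1 2) with car/cdr every time a pass needs a subterm.
  (struct num-expr (value)   #:transparent)
  (struct add-expr (lhs rhs) #:transparent)

  (define ir (add-expr (num-expr 1) (num-expr 2)))
  (num-expr-value (add-expr-lhs ir))  ; => 1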


Formally specified IR grammars go back at least to Algol (1958). I
concede that I am not aware of any (non-academic) compiler that
actually has used this approach: AFAIAA, even the Algol compilers
internally were ad hoc.  But the *idea* is not new.

I can recall as a student in the late 80's reading papers about
language translation and compiler implementation using Prolog
[relevant to this in the sense of being declarative programming]. I
don't have cites available, but I was spending a lot of my library
time reading CACM and IEEE ToPL so it probably was in one of those.


I'm not sure what #2 actually refers to.  I may be (probably am)
missing something, but it would seem obvious to me that one does not
write a whole lot of unnecessary code.
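
My best guess at a reading of #2: the framework itself generates the
boilerplate traversals, so a pass spells out only the clauses that do
real work. A sketch reusing the toy L0 above (again my own example,
not the article's):

  ;; L1 extends L0 with a when-form that L0 lacks.
  (define-language L1
    (extends L0)
    (Expr (e)
      (+ (when e0 e1))))

  (define-parser parse-L1 L1)

  ;; Only the when-clause is written by hand; traversal code for x,
  ;; n, if, and + is generated automatically.
  (define-pass desugar-when : L1 (e) -> L0 ()
    (Expr : Expr (e) -> Expr ()
      [(when ,[e0] ,[e1]) `(if ,e0 ,e1 0)]))

  ;; (unparse-L0 (desugar-when (parse-L1 '(when 1 (+ 2 3)))))
  ;;   => '(if 1 (+ 2 3) 0)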

The article talks about deficiencies noted with various support tools
and drivers that were provided to aid students in implementing
so-called "micropass" compilers, but who wrote those tools?  Not the
students.  If there was much superfluous code being used or generated,
well, whose fault was that?

Aside: I certainly could see it being a reaction to working with Java,
where tree-walking code has to be contorted to fit into the stupid
multiple-dispatch and visitor patterns.


YMMV (and it will),
George

Could nanopass, at least in theory, fuse multiple (or even all) passes
into one at compile time, to create a very efficient compiler that is
also logically broken down and readable in the source code?
