Anton, I think you have almost everything here
at least slightly wrong:

On Sun, 2009-09-06 at 03:47 -0400, Anton van Straaten wrote:

> Macros are fundamentally about transforming code prior to its ultimate 
> evaluation.  

With that use of the word "fundamentally", you are
really begging the question.   Other views of macros
exist and have some advantages.

The view I'm most fond of is that found in SCM:

At its core, SCM is a graph-code interpreter for
a kind of Scheme-like, sub-Scheme language.  It
is essentially a classic 3-register machine (code,
env, and accumulator).

Some opcodes yield newly constructed graphs for
evaluation. Other opcodes do in-situ graph rewriting.
These opcodes are, in this context, what macros are
fundamentally about.   Standard Scheme syntax is 
implemented as macros that incrementally rewrite
programs into low-level graph code on-the-fly, on-demand.

Far from being ad hoc, this design has a simple,
clean operational semantics that you can read off
from the SCM source.  I doubt it would be hard to
show some pretty convincing equivalences between that
operational semantics and the denotational semantics.

Partly in Guile, and later in a short-lived Guile
spin-off called Systas, I started to improve the
"reflectiveness" of the system by, yes, exposing
first-class environments and locatives alongside
the primitives for graph rewriting.

The result is that you have a small set of very 
general primitives yielding both a practical
implementation and an amazingly rich language (richer
than standard Scheme, by far).
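
To give a flavor of what "exposing" means: the
environment half below follows old Guile 1.x usage
(the-environment, local-eval), while make-locative and
friends are invented names standing in for the locative
half - so read this as a sketch, not gospel:

  ;; Capture a lexical frame as a first-class value:
  (define (make-counter)
    (let ((n 0))
      (the-environment)))              ; Guile-1.x-style capture

  (define c (make-counter))

  ;; Evaluate new code inside the captured frame:
  (local-eval '(set! n (+ n 1)) c)
  (local-eval 'n c)                    ; => 1

  ;; A locative is a first-class reference to one binding in a
  ;; frame (hypothetical API):
  (define n-loc (make-locative c 'n))
  (locative-set! n-loc 42)
  (locative-ref n-loc)                 ; => 42
  (local-eval 'n c)                    ; => 42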

As an example, let's suppose that you want a
programming environment that marries standard Scheme
to, say, a pure, lazy, normal-order lambda calculus
or an SKI machine.   Perhaps I want to write the UI
for my math tool in an imperative style but the
theorem-prover part as a combinator calculus.
Well, darn, you've got a graph rewriting engine right
there - built in and seamlessly integrated.
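
To suggest how little machinery the combinator half
needs once you think in graph-rewriting terms, here is
a toy SKI reducer in plain Scheme (a sketch, nothing
more): application nodes are rewritten in place, so in
the S rule the shared argument gets reduced at most once.

  ;; Terms are one-slot boxes.  A box holds either a combinator
  ;; symbol (S, K, I) or an application: a pair of boxes.
  (define (term x)       (vector x))
  (define (app f x)      (term (cons f x)))
  (define (contents t)   (vector-ref t 0))
  (define (rewrite! t v) (vector-set! t 0 v) t)

  ;; Reduce T to weak head normal form, normal order, rewriting
  ;; nodes in place.  (A real reducer would leave indirections in
  ;; the I and K rules to preserve sharing fully; this sketch just
  ;; copies the contents.)
  (define (whnf! t)
    (let ((c (contents t)))
      (if (not (pair? c))
          t
          (let* ((f (whnf! (car c))) (x (cdr c)) (fc (contents f)))
            (cond
              ;; I x  ==>  x
              ((eq? fc 'I) (whnf! (rewrite! t (contents x))))
              ;; (K a) b  ==>  a
              ((and (pair? fc) (eq? (contents (car fc)) 'K))
               (whnf! (rewrite! t (contents (cdr fc)))))
              ;; ((S a) b) c  ==>  (a c) (b c), with c shared
              ((and (pair? fc)
                    (pair? (contents (car fc)))
                    (eq? (contents (car (contents (car fc)))) 'S))
               (let ((a (cdr (contents (car fc)))) (b (cdr fc)))
                 (whnf! (rewrite! t (cons (app a x) (app b x))))))
              (else t))))))

  ;; ((S K) K) is the identity combinator:
  (define skk (app (app (term 'S) (term 'K)) (term 'K)))
  (contents (whnf! (app skk (term 'I))))     ; => I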

Is that a more "fundamental" view of macros?  As
I say, I'm a little weirded out by tossing around the
word "fundamental" here but there's a nice argument
for relative fundamental-ness:

Given the clean, simple operational model and 
denotational model you can give for something like SCM,
you can straightforwardly implement and prove that
you've implemented hygienic macros and that they have
all the nice mathematical properties we're familiar
with.   A deeper bass note has been struck, in that sense.
A more general set of simple primitives has been identified.

The converse direction doesn't work so well.  I don't see
any way to get from the facilities of R5 or R6 to
the computational richness of something like what's found
in the guts of SCM.


> No amount of clever hacking can change this. 

Some has, afaict.

>  Conflating 
> the transformation phase with the evaluation phase, which is what macros 
> defined in terms of first class environments do, makes code harder to 
> reason about. 

That depends entirely upon what theorems you
are trying to prove, what analytic framework you
are using, and what code you are looking at.

Strictly phased, hygienic macros *make it easier to
prove certain theorems useful for optimizing compilation*.

Graph-rewriting macros, first-class environments, and
locatives make it easy to float specialized top-level
environments characterized by strictly phased hygienic
macros.


>  Simply having ordinary code use first-class environments 
> makes things harder to reason about, since it reduces the number of 
> guarantees that can be made about the meaning of the code.

Um... two things are wrong there.  One is that we don't
have, and probably shouldn't bother seeking, any
definition of "ordinary code".  You're begging the
question there.

Second, first-class environments DO NOT reduce 
the number of guarantees about code WHICH DOES NOT
CAPTURE THEM.   They change - "add or subtract" is 
not the right idea - what guarantees can be made about
code which does capture them.
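
A tiny example of the distinction (again with
Guile-1.x-flavored names, purely for illustration):

  ;; F never captures its frame; every usual guarantee about X holds.
  (define (f x) (* x x))

  ;; G opts in: it hands its frame out, so callers can read or
  ;; rebind X.  A different contract, not a weakened version of F's.
  (define (g x) (the-environment))

  (define frame (g 10))
  (local-eval 'x frame)                ; => 10
  (local-eval '(set! x 11) frame)
  (local-eval 'x frame)                ; => 11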



> Macro systems like syntax-case let you manipulate environments at 
> transformation time, 

That's a sloppy way of speaking, insofar as the
environments you are talking about do not exist
at transformation time.


> which is when macro systems should let you 
> manipulate environments.  But there's no good reason for the internals 
> of that manipulation to spill over to runtime. 

Question begging.


>  In particular, you 
> really don't want to have runtime code depending on further code 
> transformations in order to function correctly, unless your goal is to 
> write obfuscated programs.

Egregious question begging.

Perfectly lucid programs can be written using first-class
environments and on-the-fly rewriting macros.  There are
ample existence proofs.



> Of course, there are situations where first class environments can be 
> useful.  Nothing stops them from being added as an independent 
> abstraction, though, and a number of Schemes do that, the point being 
> to only pay the cost for them when you use them. 

I think that this is really your main point, and 
there is an increment of truth to it, but I think also
some wrongheadedness.  And you're not alone in this 
wrongheadedness - it runs deep in how the community
of discourse around Scheme is making a fetish of a
standard rather than thinking clearly about what a 
Scheme standard should do:

How about this, instead:

The "small scheme" standard should describe a tiny,
interpreted dialect that is highly reflective and
has a clear and definite semantics.  A critical 
property of this tiny interpreted dialect is that its
semantics and capabilities are sufficiently rich 
that every more conservative Scheme environment can
be modeled in a natural way.

Then we can do things like formally specify hygienic
macros or modules as programs in that core dialect.
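
For instance, a module specification in that core
dialect could be nothing more than a little program
over first-class environments.  Every name below is
hypothetical - the shape is the point, not the API:

  ;; A "module" is an environment built by running the body forms
  ;; in a fresh frame and then exposing only the exported names.
  (define (make-module body exports)
    (let ((private (make-child-environment (core-top-level)))
          (public  (make-child-environment (core-top-level))))
      (for-each (lambda (form) (local-eval form private)) body)
      (for-each (lambda (name)
                  (environment-define! public name
                                       (local-eval name private)))
                exports)
      public))

  ;; Usage sketch:
  (define counters
    (make-module '((define n 0)
                   (define (next!) (set! n (+ n 1)) n))
                 '(next!)))

  ((local-eval 'next! counters))       ; => 1
  ;; N stays private: it was never copied into the public frame.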

We don't demand that every implementation worthy of
the name Scheme implement the full tiny core.  Heck,
I'm still of a mind that "eval" should be optional,
never mind first-class environments!   We can identify
other sub-dialects of high utility (e.g., "compiled 
Scheme") and some implementations will support only
certain sub-dialects.

Then we have, in "small Scheme", a nice mathematical
theory of all things Scheme that is ALSO more or less
a direct description of perfectly practical, useful
direct and complete implementation (like the SCM-family
of implementations).

I think that approach can also help to energize the
implementation community a bit as people work to make
hybrid implementations that combine a full interpreter
with various specialized compilers.   There might even
be some motivation there to standardize an FFI against
which compilers emit code.

I think that approach can also help pedagogy, because
the operational (SCM-style) semantics of a tiny Scheme
is a concrete thing that noob students with a modicum
of talent can "get", which then hands them the keys
to the castle - gives them a language in which to
understand tangibly what is going on in more
specialized environments.

I think that approach can also help us stay closer
(in a good way) to the roots of Scheme.   The literature
makes it look like Scheme really took off in the first
place because of new insights that brought together 
simple yet very general mathematical models of computation
with simple implementation techniques.





>  That cost would 
> otherwise be paid not just in terms of machine optimization, but also 
> human understanding of code.

Think of the children?  Won't someone please think
of the children?

-t


