> > - Transformation: they can look inside the structure of their arguments.
>
> Ok, here's where I think you don't want to go. I understand the power,

Too late. I'm going there... :)

Letting it sit in my mind for a few days, I have a couple of new ideas, or
rather, ideas I've read about elsewhere, rehashed into a more Perlish form.

One of the other goals of Perl6 was to allow runtime-created extensions to
the language. Yeah, there's source filters, but those are a pain to get
right, since you're dealing with a flat text representation of a
highly-hierarchical structure. Another thing, if I understand them
correctly, is that bottom-up parsers require the full grammar to be
available at construction time, whereas top-down parsers can be extended
at runtime (e.g. Parse::RecDescent). Has anyone done any thinking along
these lines for how we are implementing the Perl 6 grammar?
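For what it's worth, the extend-the-grammar-at-runtime idea doesn't need
anything exotic. Here's a toy sketch in Python (the rule names and tree
shapes are invented for illustration, not anything from the actual Perl 6
grammar) where the grammar is just a mutable table of rules:

```python
# A toy top-down parser whose grammar is a dict of rule -> handler, so new
# rules can be added while the program runs (the Parse::RecDescent idea,
# sketched in Python; all names here are invented for illustration).

class Parser:
    def __init__(self):
        self.rules = {}  # rule name -> callable(parser, tokens, pos)

    def add_rule(self, name, handler):
        self.rules[name] = handler  # runtime extension: just mutate the table

    def parse(self, rule, tokens, pos=0):
        return self.rules[rule](self, tokens, pos)

def number(p, toks, pos):
    # matches one numeric token, yielding a ('num', value) node
    if pos < len(toks) and toks[pos].isdigit():
        return ('num', int(toks[pos])), pos + 1
    return None

parser = Parser()
parser.add_rule('number', number)

# Later -- even at "runtime" -- a module can teach the parser new syntax:
def addition(p, toks, pos):
    left = p.parse('number', toks, pos)
    if left and left[1] < len(toks) and toks[left[1]] == '+':
        right = p.parse('number', toks, left[1] + 1)
        if right:
            return ('add', left[0], right[0]), right[1]
    return left

parser.add_rule('expr', addition)

tree, _ = parser.parse('expr', ['2', '+', '3'])
print(tree)  # ('add', ('num', 2), ('num', 3))
```

The point is only that a top-down parser's grammar can be an ordinary
runtime data structure, which is exactly what source filters are not.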

My current thinking is to go with a Parse::RecDescent-style grammar for the
language, which allows runtime extensibility. (Wow, what a revolutionary
concept! ;) The rules would obviously not translate back into
code, as Lisp macros do, but into some syntax tree. What I wanted from
macros, then, could simply be transformations on this syntax tree. They
give us the unevaluated arguments as part of the tree, allow us to operate
on them, and 'return' values by mutating the nodes of the tree.

Examples of transformers could be the compilation of the tree into Parrot
code, optimization of constant expressions, unrolling of hyper-operators,
and so on.
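Constant folding as a tree transformation, for instance, is only a few
lines. A Python toy (the tuple node shapes are my own invention, not a
proposed Perl 6 tree format):

```python
# One hypothetical tree transformer: constant folding.  Nodes are plain
# tuples -- ('num', value) for literals, (op, left, right) for operators.
# The transformer rewrites any subtree whose children are all constant
# into a single 'num' node, bottom-up.

def fold(node):
    if node[0] == 'num':
        return node
    op, l, r = node
    l, r = fold(l), fold(r)          # fold children first
    if l[0] == 'num' and r[0] == 'num':
        if op == '+':
            return ('num', l[1] + r[1])
        if op == '*':
            return ('num', l[1] * r[1])
    return (op, l, r)

tree = ('*', ('num', 5), ('+', ('num', 2), ('num', 3)))
print(fold(tree))  # ('num', 25)
```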

Yes, it's a more modest proposal than a full macro-ing system, since most
of it was probably part of the Perl6 plan anyway. I'm just going to chime
in and say that if I can get the ability to extend the grammar on the fly
(to allow new kinds of nodes, or at least new kinds of attributes on
existing nodes), and have the ability to define tree traversals before the
compile-to-Parrot traversal occurs, then I think I will be pretty happy.

Please just try to make the Perl6 tree format a little bit simpler than
B. I'm sure it wouldn't be too hard to provide:

Deparse, in case you're a masochist who loves parsing Perl code, although
there should be plenty of modules to help with that.

And Parse, which lets you give it a scope object (some node) and Perl
code, and creates a subtree pointing to the scope it was evaluated
in.

The stringify operator on a node would run Deparse on it, and you could
easily interpolate into strings with that. (Don't shoot me, Aaron! ;)

I think these should provide all of the features macros had:
- transformation: 'nuff said.
- conditional evaluation: nothing's evaluated, and we can control
evaluation by rearranging the tree.
- multiple evaluation: put it in a while loop, or in a label-goto loop,
and this happens.
- calling environment: see Parse description above. Setting the parent of
any node would set its scope.
- saving function calls: we can take a function's contents by looking it up
in the tree, and 'inline' it by reparenting it.
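To make that last point concrete, here's a toy Python sketch of
inlining-by-reparenting (again, the tuple tree shape and the `functions`
table are invented for illustration):

```python
# Sketch of "inlining by reparenting": a call node is replaced in place by
# a copy of the called function's body, looked up in a (toy) symbol table.

import copy

# invented tree shapes: a call is ('call', name); a function body is any node
functions = {'double': ('*', ('num', 2), ('var', 'x'))}

def inline_calls(node):
    if not isinstance(node, tuple):
        return node                       # leaf: operator name, literal, etc.
    if node[0] == 'call' and node[1] in functions:
        # replant a copy of the function's contents where the call sat
        return copy.deepcopy(functions[node[1]])
    return tuple(inline_calls(c) for c in node)

tree = ('+', ('num', 1), ('call', 'double'))
print(inline_calls(tree))  # ('+', ('num', 1), ('*', ('num', 2), ('var', 'x')))
```

(A real version would also have to rename or bind the function's
parameters, which I'm waving my hands over here.)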

A lot of this would require a decent amount of work to accomplish the goal
of macros. Fortunately, abstraction layers can be built. The 'inline' tag
can be attached as an attribute on subroutines, for example, and the
inlining code would automatically replant the code's contents. And to add
support for inline functions, all you have to do at the top of your code
is write 'use Inline'. Oh, wait, scratch that.

It might also be possible to define more generalized macros, using
these transformation tools to pull and parse the unevaluated nodes of
the tree, achieving the same power as macros, but hidden behind
some module that provides a better interface than either you
(Aaron) or I can devise. :)

There would, of course, be issues to resolve with this approach, such as:

Scope of the transformations: Would a vector math library's use of Inline
cause its functions to be inlined into packages that don't use
Inline? Could the calling package explicitly disallow inlining of any
functions?

Order of transformations: compiling to Parrot bytecode should obviously
occur as one of the last transformations, but before any bytecode-level
optimizations are performed. A constant-folder pass on 5*sin(2*PI)
wouldn't detect 5*X as being constant until after sin(2*PI) has been
detected as constant. Scheduling of transformations is a hard
problem, and the intentional programming project at MS (a much more
ambitious project along these transformation lines) had a system of
undoing, and repeatedly re-attempting, transformations to find a
schedule that worked, on the fly. I don't believe that approach was even
close to fast enough for Perl6's purposes, however. :(
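A cheaper cousin of that undo-and-retry scheduler is simply re-running the
whole pass list until the tree stops changing. A Python toy, using the
5*sin(2*PI) example (tree shapes invented for illustration):

```python
import math

# Two toy transformers over tuple trees: one folds sin(constant) calls,
# one folds multiplication of constants.  Re-applying the pass list until
# the tree reaches a fixpoint finds a working order without hand-scheduling.

def fold_calls(n):
    if isinstance(n, tuple) and n[0] == 'sin' and n[1][0] == 'num':
        return ('num', math.sin(n[1][1]))
    if isinstance(n, tuple):
        return tuple(fold_calls(c) if isinstance(c, tuple) else c for c in n)
    return n

def fold_arith(n):
    if not isinstance(n, tuple):
        return n
    n = tuple(fold_arith(c) for c in n)      # fold children first
    if n[0] == '*' and n[1][0] == 'num' and n[2][0] == 'num':
        return ('num', n[1][1] * n[2][1])
    return n

def fixpoint(tree, passes):
    while True:
        new = tree
        for p in passes:
            new = p(new)
        if new == tree:                      # nothing changed: done
            return tree
        tree = new

# 5 * sin(2 * PI): the arithmetic pass can't finish until the call pass has
# run, so a single sweep in the "wrong" order isn't enough -- the loop is.
tree = ('*', ('num', 5), ('sin', ('*', ('num', 2), ('num', math.pi))))
result = fixpoint(tree, [fold_arith, fold_calls])
print(result)  # a single ('num', ...) node, approximately zero
```

This converges only when every pass shrinks or preserves the tree, which
is another constraint a real system would have to enforce.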

Another approach is to define coarse transformation stages, and have all
transformations be assigned to one of them. This should work well for
transformations which fit nicely into one of the existing categories. But
I'm not experienced enough in compiler writing to come up with even a
rough idea as to the proper categories. Perhaps others have some ideas on
whether this approach is workable?
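A sketch of what coarse stages could look like, in Python, with stage
names and passes that are purely my guesses:

```python
# Coarse stages as an ordered list; each stage holds a list of registered
# transformers, and the compiler just runs the stages in order.  The stage
# names and the two toy passes below are invented for illustration.

STAGES = ['macro-expand', 'optimize', 'emit']
passes = {stage: [] for stage in STAGES}

def register(stage, transform):
    passes[stage].append(transform)

def run(tree):
    for stage in STAGES:
        for t in passes[stage]:
            tree = t(tree)
    return tree

# a toy folder that only knows one tree, and a toy "bytecode" emitter
register('optimize',
         lambda t: ('num', 25) if t == ('*', ('num', 5), ('num', 5)) else t)
register('emit',
         lambda t: f'set I0, {t[1]}' if t[0] == 'num' else t)

print(run(('*', ('num', 5), ('num', 5))))  # set I0, 25
```

A module never has to think about global ordering, only about which stage
its pass belongs in, which is exactly what makes the categories hard to
choose up front.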

Ideas, comments?
Mike Lambert
