On Thu, Sep 01, 2005 at 17:12:51 +0000, Luke Palmer wrote:
> On 9/1/05, Yuval Kogman <[EMAIL PROTECTED]> wrote:
> > On Wed, Aug 31, 2005 at 13:43:57 -0600, Luke Palmer wrote:
> > > Uh yeah, I think that's what I was saying.  To clarify:
> > >
> > >     sub foo (&prefix:<+>) { 1 == 2 }    # 1 and 2 in numeric context
> > >     foo(&say);   # nothing printed
> > >
> > > But:
> > >
> > >     sub foo (&prefix:<+>) { +1 == +2 }
> > >     foo(&say);    # "1" and "2" printed
> > >
> > > Luke
> > 
> > Furthermore, even if:
> > 
> >         sub &infix:<==> ($x, $y) { +$x == +$y }
> >         sub foo (&prefix:<+>) { 1 == 2 }
> > 
> >         foo(&say); # nothing printed
> > 
> > but if
> > 
> >         sub foo (&*prefix:<+>) { 1 == 2 }
> > 
> > then what?
> 
> Um, you can't do that.  You can't lexically bind a global; that's why
> it's global. You could do:
> 
>     sub foo (&plus) { temp &*prefix:<+> = &plus;  1 == 2 }
> 
> And then something would be printed... maybe, depending on the
> implementation of == (with your implementation above, it would).

There are some possible problems with going this far...

From an optimizing-linker perspective, the way to deal with this
relies very heavily on inferring possible dynamic scopes...

As I see it, the instance of '==' compiled into that foo should have
its '+' instance resolved, and inferred into the same symbol as the
temp container that 'foo' is creating, so that in effect it's
prebound to a value that isn't there yet.
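To make the "prebound to a container" idea concrete, here is a minimal
sketch (Python standing in for Perl 6; all names are invented for
illustration). The '==' compiled into foo does not call a fixed '+';
it reads a shared cell that a temp-style rebinding swaps and restores
around the call:

```python
from contextlib import contextmanager

OPS = {"prefix:<+>": lambda x: int(x)}   # the core numifying '+'

@contextmanager
def temp(table, name, value):
    """Dynamic-scope rebinding: swap the cell, restore it on exit."""
    saved = table[name]
    table[name] = value
    try:
        yield
    finally:
        table[name] = saved

def num_eq(x, y):
    # '==' as defined upthread: +$x == +$y, with '+' resolved through
    # the same container that temp rebinds.
    plus = OPS["prefix:<+>"]
    return plus(x) == plus(y)

printed = []
def say(x):
    printed.append(str(x))

def foo(plus):
    # sub foo (&plus) { temp &*prefix:<+> = &plus; 1 == 2 }
    with temp(OPS, "prefix:<+>", plus):
        return num_eq(1, 2)

result = foo(say)   # say gets called on 1 and 2 inside '=='
```

With this wiring, foo(say) really does print "1" and "2", matching
Luke's prediction, and the core '+' is back in place afterwards.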

Furthermore, value inference can let us optimize and hard-code
things like

        foo(&prefix:<+>); # the original

to mean exactly the same thing as

        sub foo () { 1 == 2 }

which can then simply reduce the whole call to 'foo' into false.
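The folding step can be sketched like so (Python as a stand-in, with
invented names): when the operator argument is statically known to be
the core numifier and the body is pure, the whole call collapses to a
constant.

```python
# Toy value-inference sketch: specialize a sub on a known argument.
CORE_PLUS = lambda x: int(x)

def foo_body(plus):
    # the body of: sub foo (&prefix:<+>) { +1 == +2 }
    return plus(1) == plus(2)

def specialize(body, known_arg):
    """If the argument is known 'at link time' and the body is pure,
    fold the call to the constant it must produce."""
    constant = body(known_arg)      # evaluated once, ahead of time
    return lambda: constant         # the residual, argument-free foo

folded_foo = specialize(foo_body, CORE_PLUS)
```

Here folded_foo() is just the constant False, which is exactly the
"reduce the whole call to 'foo' into false" step.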

But...

The things that make this hard:

        * what is the definition of &infix:<==>?
        * what is the definition of &prefix:<+>?
        * where does the line between Perl 6 and "builtin" go?
        * for the sake of completeness, is all of Perl 6 defined in
          Perl 6, except when inference makes it safe to assume that
          something can be done with a runtime primitive? (This is
          usually the case.)
        * what happens on 'eval "temp $opname = sub { 'moose' }"'? I
          think in this case inference-based assumptions are unwound
          and redone.

These behaviors are more than just optimizations; they define the
behavior of the language with respect to runtime modification of the
definition of everything.

As I see it, the correct solution is for Perl 6 to be known to infer
anything and everything as far as possible, for reasons of
performance.

If the user annotated types, used traits like 'is readonly', or
asked the runtime to close classes, make the metamodel read-only, or
disable eval, then inference can take advantage of this. In any
case, if inference can detect a *guaranteed* error, this should be a
compile-time error. If the inference engine can detect a possible
error, it should be a user option whether this emits a warning, an
error, or nothing at all. This choice should be global via command
line switches, as well as lexical via pragmas, with command line
switches overriding everything for the whole program, or just for a
certain package or file.

From there, code inferred to be "staticish" can be compiled to use
"native" functions, as provided by the runtime.

At least some of the code must conform to this, or the runtime
cannot execute the PIL at all - there is no point from which to get
things going, because the definition is circular.

The inference is optimistic, in the sense that it assumes things
will not be mucked up at runtime.

Bindings of operators and subs to different values should be an
inferable thing.

However, runtime loading of unknown code via eval or require (which
are really the same thing), or runtime imports, causes the inference
engine to unwind to the last consistent point, and resume from
there.
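One way to picture the unwinding (a sketch in Python, all names
invented): inferred facts are cached together with the assumptions
they depend on, and loading unknown code invalidates exactly the
facts whose assumptions it might violate.

```python
# Each inferred fact records which symbols it assumed were stable.
facts = {
    "foo-folds-to-false": {"assumes": {"prefix:<+>"}},
    "bar-is-pure":        {"assumes": {"infix:<*>"}},
}

def load_unknown_code(mutated_symbols):
    """eval/require of unknown code: drop every fact whose
    assumptions it may violate; everything else stays valid."""
    for name in list(facts):
        if facts[name]["assumes"] & mutated_symbols:
            del facts[name]

load_unknown_code({"prefix:<+>"})   # an eval that rebinds '+'
```

After the load, "foo-folds-to-false" is gone and inference resumes
from the surviving facts - that is the "last consistent point".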

If errors are found at this stage they are reported under the same
policy as the compile-time options - as early as possible, as a
warning, an error, or nothing at all.

An implementation strategy for making this work is that all the
operators, functions and whatnot that a bit of code looks up and
calls become implicit parameters to that bit of code.

From there, the call chains are traced from the root of the program
down to the lowest level, and implicit parameters are optimized away
where they can be.

Where they cannot, the code is left as is, with the operator in
question passed in from the calling code, and any side effects, like
temporary binding, take effect on the parameter directly.
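The two outcomes of that strategy can be sketched as follows (Python
as a stand-in; functools.partial plays the role of baking in an
inferred binding):

```python
from functools import partial

def eq(plus, x, y):
    # '==' with its '+' made an explicit (implicit) parameter
    return plus(x) == plus(y)

def foo(plus):
    # foo threads the implicit parameter down the call chain
    return eq(plus, 1, 2)

CORE_PLUS = lambda x: int(x)

# Case 1: the binding is inferred to be the core '+', so it is baked
# in and the implicit parameter is optimized away.
foo_static = partial(foo, CORE_PLUS)

# Case 2: the binding is not inferable; the parameter stays, and a
# caller doing a temp-style rebinding just passes its own operator.
seen = []
dynamic_result = foo(lambda x: seen.append(x))
```

foo_static() is the folded constant False, while the dynamic call
routes 1 and 2 through the caller-supplied '+' instead.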

Symbols that are lexically orthogonal to the call chain, like this:

        my &prefix:<+>;
        sub bar (&foo) { &prefix:<+> := &foo };
        sub moose { 1 == 2 }

don't matter, since the overridden &prefix:<+> is outside of the
scope where == was defined.

Symbols that span multiple call chains like this:

        our &*prefix:<+>;
        sub bar (&foo) { &*prefix:<+> := &foo };
        sub moose { 1 == 2 }

simply make everything butt slow, because this prefix:<+> is stuck
into every possible use of forced numeric context. &foo is the only
way out, and it must explicitly refer to the core &prefix:<+> if it
wishes to retain the numifying semantics at all; it can do this by
lexically binding the old &prefix:<+> into its own container, at the
time of its definition.

The effect of closures also relates to this - a closure is just a
code ref .assuming() its indeterminate parameters, which were
resolved at runtime.
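That reading of closures as partial application can be sketched with
functools.partial (Python standing in for .assuming(); names are
illustrative only):

```python
from functools import partial

def body(plus, x, y):
    # the closure's free variable 'plus' made an explicit parameter
    return plus(x) == plus(y)

core_plus = lambda v: int(v)

# "a code ref .assuming() its indeterminate parameters":
closure = partial(body, core_plus)
```

Once 'plus' is assumed, closure behaves like the numeric '==' from
the start of the thread: closure("3", 3) is true, closure(1, 2) is
false.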

-- 
 ()  Yuval Kogman <[EMAIL PROTECTED]> 0xEBD27418  perl hacker &
 /\  kung foo master: /methinks long and hard, and runs away: neeyah!!!
