On Fri, 2 Aug 2002, Jerome Vouillon wrote:

> On Thu, Aug 01, 2002 at 10:57:34AM -0700, Sean O'Rourke wrote:
> > My naive implementation would have an array of hashes for each sub, with
> > one entry for each level of scope within.
>
> I would use an array of arrays or a linked-list of arrays.  This is
> hardly more difficult to implement (you just need to keep track of the
> variable indices during compilation) and the generated code will be a
> lot more efficient.

I don't see how you can cope with '%MY' unless you have a hash.  You could
have a hash in addition to the array, I suppose.  In any case, Dan has
promised us some sort of indexed "back-door" access to our hashes, so I
get the feeling the point is moot -- hashes will be arrays in some sense.
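
Just to pin down what "a hash in addition to the array" might look like, here's a rough Perl 5 sketch (the names are mine, not proposed APIs): each pad keeps an array of slots for the lookups the compiler resolved statically, plus a name-to-slot hash that %MY-style access can go through.

  use strict;
  use warnings;

  # One pad: indexed slots for statically resolved lookups, plus a
  # name => slot map so %MY can still get at things by name at run time.
  my $pad = { by_index => [], by_name => {} };

  sub add_lexical {
      my ($pad, $name, $value) = @_;
      push @{ $pad->{by_index} }, $value;
      $pad->{by_name}{$name} = $#{ $pad->{by_index} };  # remember the slot
  }

  sub lookup_by_name {                  # roughly what %MY access would do
      my ($pad, $name) = @_;
      my $slot = $pad->{by_name}{$name};
      return defined $slot ? $pad->{by_index}[$slot] : undef;
  }

  add_lexical($pad, '$x', 42);
  print lookup_by_name($pad, '$x'), "\n";   # 42; compiled code would just use slot 0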

> > This is just like the
> > pad_stack, but implemented using only language features.  (Dynamic)
> > lexical lookup works its way through this stack.  Static lookups can
> > probably be resolved to registers, with code inserted to fetch values out
> > of the pad the first time they are used, and store them back when a scope
> > is exited.
>
> You would probably need to store them back each time you perform a
> function call.

You're right -- the callee can get at those lexicals through a closure or
through caller.MY, so anything cached in registers has to be flushed back
to the pad across calls.

> I think it is good enough for the moment to always go through the
> stack.

Without performance numbers this is hard to judge, but always going through
the stack can turn a single "a = b + c", which is just "add P0, P1, P2"
once a, b, and c are already in registers, into a hideous five instructions:

  fetch_lex P0, 'a'     # Because how we store result depends on a's type
  fetch_lex P1, 'b'
  fetch_lex P2, 'c'
  add P0, P1, P2
  store_lex P0, 'a'

> > Returning a closure would make a copy of the scope array, containing
> > references to the pad hashes (here's where GC makes me happy, and where I
> > may make the GC unhappy).
>
> You don't need to make a copy at this point.  Instead, the copy should
> happen each time you enter a block containing lexical variables.  (You
> should also allocate and fill a new pad at this time.)

Making a copy on every scope entry would solve the problem (the Befunge
stack-stack metaphor grows stronger!), but I was hoping we could do
better, since most lexical scopes won't be creating closures.  The more I
think about it, though, the more complicated that gets.  And actually,
thanks to %MY and caller.MY, we need a new pad even for scopes that declare
no lexicals of their own, since variables can be injected into them later.
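
For concreteness, here is roughly how I picture the copy-on-entry scheme
(again a Perl 5 sketch with invented names): entering a block shallow-copies
the chain of pad references and pushes a fresh pad, so a closure can simply
hang on to whatever chain it was created under.

  use strict;
  use warnings;

  # Entering a block: copy the spine of the chain (the pads themselves
  # are shared, not copied) and push a fresh pad for this block.
  sub enter_scope {
      my ($chain) = @_;            # array ref of pad hash refs
      my @new_chain = @$chain;
      push @new_chain, {};
      return \@new_chain;
  }

  # Creating a closure: no copying needed, just capture the chain as-is.
  sub make_closure {
      my ($chain, $code) = @_;
      return sub { $code->($chain, @_) };
  }

The catch, of course, is that every block entry pays for the spine copy
whether or not a closure is ever made, which is exactly the cost I was
hoping to dodge.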

> > Waving my hands somewhat more quickly, as an optimization, lexicals can
> > just live in registers when the compiler sees no references to %MY or
> > closures.  Nastiness like eval and caller.MY can be handled by inserting
> > fixup code to reconstruct the lexical array on demand.
>
> caller.MY is especially nasty because you have to pay for it even when
> you don't use it: you can't know whether a function has to support it
> just by looking at the function code.  It will be really tricky to
> implement any optimization if one can write things like:
>   caller(1).MY{'$y'} := $x

Yes.  Not impossible, I think, but more likely just tricky enough that we
never get around to it.
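
To spell out the pay-even-when-you-don't-use-it problem, a hand-wavy Perl 5
sketch (every name below is invented): as long as any callee might do this,
the caller has to keep a real, writable pad around -- or at least fixup code
to materialize one -- even if its compiler saw no reason for its lexicals to
live anywhere but registers.

  use strict;
  use warnings;

  our @call_stack;       # each frame: { chain => [ ...pad hash refs... ] }

  # What caller(1).MY{'$y'} := $x boils down to: the callee writes into
  # the innermost pad of its caller's chain.
  sub poke_caller {
      my ($name, $value) = @_;
      my $caller_frame = $call_stack[-2];        # -1 would be our own frame
      $caller_frame->{chain}[-1]{$name} = $value;
  }

  push @call_stack, { chain => [ {} ] };         # caller's frame
  push @call_stack, { chain => [ {} ] };         # callee's frame
  poke_caller('$y', 42);
  print $call_stack[-2]{chain}[-1]{'$y'}, "\n";  # 42, injected from below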

/s
