Austin Hastings wrote:

> [... code example ...]

Good point to raise, but I'm not sure about your conclusion.

12 and 13 don't exist *in registers,* but they certainly do exist at
various points: in the original source, in the AST, and in the
unoptimized PASM (if any). The registers were optimized away because the
values are statically knowable. So the variables could be resurrected;
only the variable-to-register mapping would need to be adjusted. The
optimizer could just emit "temp_a = constant 12" instead of "temp_a =
I8" in its metadata for the sequence point.

So my conclusion is not that this optimization is impossible, but that
register file consistency isn't enough: High-level variables need to be
brought into a findable state. If straight PBC/PASM (not IMCC) were
being optimized, it would be the contents of the parrot register file
(as written) that must be findable, even though the registers might have
been reallocated. By contrast, when IMCC is being optimized (as in your
example), its variables are much more like C variables than they are
like registers.

The level at which consistency is achieved at sequence points must be
the same level at which the speculative optimization was performed.
parrot just provides a particularly large number of intermediate
representations, and thus a large number of levels at which speculative
optimizations might be performed:

    Perl 6 ==parser=> AST ==compiler&optimizer=> IMCC
           ==compiler&optimizer=> PASM ==assembler=> PBC
           ==compiler&optimizer=> machine code

So eek. But any optimizer in that chain which doesn't attempt
speculative optimizations wouldn't have to worry about them.


> 2- In the "living dangerously" category, go with my original
> suggestion: GC the compiled code blocks, and just keep executing what
> you've got until you leave the block.

Now, maybe when the invalidated compilation is set loose, parrot could
check whether it's on the stack and just flat-out emit a warning if so,
letting execution of the routine continue even knowing that it might now
misbehave. The programmer will have been informed through the warning,
so it's kind of okay--so long as the optimizations are applied in a
consistent manner. That is, it would be unacceptable if these warnings
only appeared after serving a web page 100,000 times, when parrot (much
as HotSpot does with a JSP) finally decides to attempt a
heavily-optimized compile of your Mason component.
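
In Perl-flavored pseudocode (every name here is invented; the real
check would live inside the runtime), that policy might look like:

    # Invalidate a compiled block without rewriting any live frames;
    # just warn if the stale code is still executing somewhere.
    my @call_stack;    # one entry per active frame: { block => ... }

    sub invalidate_block {
        my ($block) = @_;
        $block->{valid} = 0;    # no new calls enter the old code
        if ( grep { $_->{block} == $block } @call_stack ) {
            warn "'$block->{name}' was invalidated while executing; "
               . "results may not reflect the redefinition\n";
        }
    }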

This strategy makes some amount of sense. Rewriting optimized stack
frames is a VERY hard problem--Java 1.4.x provides prior art here,
demonstrating a very long period of instability after HotSpot was
introduced. The code is VERY difficult to exercise thoroughly, it has
plenty of opportunity to make everything crash, and it's a lot of work
(and a lot of code) for something which probably doesn't affect much
"good" code anyhow.

How much work is it actually worth to solve this problem, rather than
giving the programmer (a) enough information to isolate and diagnose the
side-effects and (b) pragmas to turn off the optimizations when needed?
The advantages of speculative optimizations and dynamism can both
*easily* be retained if some (relatively minor?) caveats are accepted.

Let's compare that to Perl 5.

The first example was of an inlined sub returning a constant. This
strategy does better than perl 5 would--perl 5 just emits a warning and
continues to use the old inlined value indefinitely.

    Advantage: parrot
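
For reference, perl 5's behavior here is easy to demonstrate:

    use strict;
    use warnings;

    sub TWELVE () { 12 }        # () prototype + constant body: inlinable
    sub show { print TWELVE, "\n" }    # compiled with a literal 12

    show();                     # prints 12

    # Warns "Constant subroutine main::TWELVE redefined", but show()
    # keeps the old inlined value.
    *TWELVE = sub () { 13 };
    show();                     # still prints 12

The warning fires once, at redefinition time; thereafter the stale
value is used silently.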

My example was a method-call-becomes-[inlined]-function-call
optimization, like the one in HotSpot that Sun's so proud of. Here,
perl 5 does better: It never attempts the optimization, and thus
always behaves correctly.

    Advantage: perl 5
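
For example, a mid-stream monkey-patch in perl 5 is picked up
immediately, because every call goes back through method lookup:

    use strict;
    use warnings;

    package Counter;
    sub new  { bless { n => 0 }, shift }
    sub step { $_[0]{n} += 1 }

    package main;
    my $c = Counter->new;
    $c->step;                   # dispatched via method lookup: n == 1

    # Redefine mid-stream; perl 5 never burned the old definition
    # into the call site, so the next call sees the new one.
    {
        no warnings 'redefine';
        *Counter::step = sub { $_[0]{n} += 10 };
    }
    $c->step;
    print $c->{n}, "\n";        # 11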

Your example is overloading the infix "+" operator in mid-stream. Again,
perl 5 doesn't attempt the optimization, so it always does the right
thing.

    Advantage: perl 5
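
To illustrate, using the method-name (late-bound) form of overloading:

    use strict;
    use warnings;

    package Num;
    use overload '+' => 'add';  # a method *name*: looked up per use
    sub new { my ($class, $v) = @_; bless { v => $v }, $class }
    sub add { Num->new( $_[0]{v} + $_[1]{v} ) }

    package main;
    my ($x, $y) = (Num->new(2), Num->new(3));
    print +($x + $y)->{v}, "\n";    # 5

    # Re-overload "+" in mid-stream; the next addition sees it.
    {
        no warnings 'redefine';
        *Num::add = sub { Num->new( $_[0]{v} * $_[1]{v} ) };
    }
    print +($x + $y)->{v}, "\n";    # 6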

So the comparison is not entirely clear-cut, although perl 5 does
provide (very) limited prior art for the (limited) acceptability of
side-effects due to speculative optimizations.


> Arguably, sequence points could be used here to partition the blocks
> into smaller elements.

(Breaking code fragments down at sequence points creates a lot more
memory fragmentation, reduces locality, adds memory allocation overhead,
and complicates branching: a standard PC-relative branch becomes, what,
a PC-relative load plus a register-indirect branch? Ick. I don't think
branch history caches would be very happy.)

Sounds like Dan's not keen on sequence points in the first place, since
sequence points prohibit code motion optimizations. Assuming that
code motion optimizations take precedence over speculative
optimizations, stack frame re-writing is impossible within that
framework, and this entire class of speculative optimizations either (a)
must not be implemented, (b) must check before proceeding with the
optimized path [and that check may be more expensive than not performing
the optimization in the first place], or (c) might allow the program to
behave other than as written. Any other options?
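
For flavor, option (b) might amount to something like this guard
(Perl-ish pseudocode, all names invented):

    # Guard the speculated fast path with a generation counter which
    # every re-overload of "+" bumps; stale code deoptimizes itself
    # by falling back to full dispatch.
    my $plus_generation = 0;    # bumped on each re-overload of "+"

    sub full_dispatch { die "unoptimized path goes here" }  # stub

    sub compile_speculative_add {
        my $compiled_for = $plus_generation;   # snapshot when compiled
        return sub {
            my ($a, $b) = @_;
            return $a + $b if $plus_generation == $compiled_for;
            return full_dispatch('+', $a, $b); # guard failed
        };
    }

That guard is one comparison per operation--which is exactly the cost
being questioned in option (b).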

Option (c), as discussed above, might be okay so long as optimizations
are applied predictably and diagnostics are sufficient. It's good enough
for perl 5 in limited circumstances. On the other hand, method calls are
much more common than inlinable constant subroutines: It might not be
good enough for parrot, depending upon the speculative optimizations in
question.

--
 
Gordon Henriksen
IT Manager
ICLUBcentral Inc.
[EMAIL PROTECTED]

