At 09:34 AM 2/12/2001 -0800, Jan Dubois wrote:
>On Mon, 12 Feb 2001 16:13:10 +0000, Simon Cozens <[EMAIL PROTECTED]> wrote:
>
> >How much can we do in the compiler, and how much can we do in the
> >interpreter? If we're having cached bytecode, it makes sense to do
> >as much optimization as we can in the compiler. If not, we might
> >as well brute force things to conserve compilation time. Or should
> >this be user-selectable with "use less"?
> >
> >I'm still thinking about things like type inferencing to reduce the
> >number of conversions that need to be done at runtime.
>
>I guess it depends on how the bytecode is being used.  I know that the JIT
>backend guys typically *don't* want the frontends to optimize too heavily
>because that dramatically increases the number of patterns they have
>to optimize again in the JIT phase.  If the bytecode is already
>specialized for a target environment then having all optimizations in the
>compile phase might make sense.

We can always have the version with the Java backend run with minimal 
optimizations...
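
To make the type-inferencing point a little more concrete, here's the sort 
of thing I assume Simon means -- a toy sketch in plain Python, with made-up 
names and ops, not anything resembling the real compiler:

    # Toy sketch only -- the types, ops, and names here are invented.
    # The point: if the compiler can prove both operands of an add are
    # integers, it can emit a specialized integer op and skip the
    # runtime type check/conversion entirely.

    KNOWN_TYPES = {"i": "int", "j": "int", "s": "str"}  # pretend inference results

    def emit_add(dest, lhs, rhs):
        if KNOWN_TYPES.get(lhs) == "int" and KNOWN_TYPES.get(rhs) == "int":
            return ("add_i", dest, lhs, rhs)       # no conversion at runtime
        return ("add_generic", dest, lhs, rhs)     # runtime has to coerce operands

    print(emit_add("t1", "i", "j"))   # ('add_i', 't1', 'i', 'j')
    print(emit_add("t2", "t1", "s"))  # ('add_generic', 't2', 't1', 's')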

>Another thing to keep in mind: It might make debugging easier if you optionally
>have access to the unoptimized bytecode at runtime.  The same goes for
>generating error messages (what again is the name of the variable that
>isn't initialized here?).

When debugging you'll typically run without much in the way of 
optimizations. If you run a program with heavy optimization and strip the 
bytecode for speed/space reasons, you'll get less help from the 
interpreter. That's OK, though, since in that case you did, after all, tell 
the interpreter you didn't need the help.
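
Roughly the tradeoff I mean, in toy form (made-up structures, nothing like 
the real bytecode layout):

    # Toy sketch -- invented structures, not the actual bytecode format.
    # If the optional debug segment is still around, a warning can name the
    # source variable; once it's stripped, all we can report is the register.

    bytecode = {
        "ops": [("fetch", 3), ("print", 3)],
        "debug": {3: "$foo"},                # register -> source variable name
    }

    def uninit_warning(code, reg):
        names = code.get("debug") or {}
        where = names.get(reg, "register %d" % reg)
        return "Use of uninitialized value %s" % where

    print(uninit_warning(bytecode, 3))   # names $foo
    del bytecode["debug"]                # what stripping would throw away
    print(uninit_warning(bytecode, 3))   # falls back to "register 3"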

                                        Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                         have teddy bears and even
                                      teddy bears get drunk
