Leopold Toetsch writes:
> This term came up in a recent discussion[1]. But I'd like to give this
> term a second meaning.

Except what you're talking about here is premature *optimization*.
You're expecting certain forms of the opcodes to be slow (that's the
pessimization part), but then you're actually using opcodes that you
think can be made faster.  This is a classic Knuth example.

If I were Dan, here's what I'd say (though perhaps I'd say it in a
somewhat different tone :-):  don't worry about it.  Inline caching is
just another optimization.  We need to be feature complete first.

Leo, speed is never off your mind, so we can be fairly certain that we
won't be making any decisions that are going to bite us speed-wise.
Plus, at this point we can still change interfaces (when we go past
feature complete into real-life, benchmarked optimization).  Yeah, the
compilers will have to change, but really that's not a big issue.  I've
been writing compilers for Parrot for a while, and stuff is always
changing; it usually costs me one or two lines of code if I designed
well.

And if you really feel like you need to optimize something, look into
the lexicals-as-registers idea you had to fix continuation semantics.
Then you can work under the guise of fixing something broken.

Luke

> 
> During design considerations we (myself included, of course) tend to
> discard or pessimize ideas early because they look inefficient. Recent
> examples include:
> - spilling
> - lexical (re-)fetching
> - method calls
> 
> We even have some opcodes that work around the estimated cost of an
> operation, e.g. for attribute access:
> 
>   classoffset Ix, Pobj, "attr"
>   getattribute Pattr, Pobj, Ix
> 
> instead of a plain and clear single opcode:
> 
>   getattribute Pattr, Pobj, "attr"
> 
> We think the former is more efficient because we can cache the
> expensive hash lookup in the index C<Ix> and then use plain integer
> math to access other attributes. (This is BTW problematic anyway, as a
> side effect of a getattribute could be a change to the class
> structure.)
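
To make the intended win concrete, here is a minimal C sketch. The
structures and the function name class_offset are toy stand-ins, not
Parrot's real data structures or API, and the linear scan merely plays
the role of the expensive hash lookup:

  #include <stdio.h>
  #include <string.h>

  /* Toy stand-ins, not Parrot's real structures. */
  typedef struct { const char *names[4]; } Class;
  typedef struct { Class *klass; int attrs[4]; } Obj;

  /* The "classoffset" part: a linear scan standing in for the
   * expensive hash lookup. */
  static int class_offset(Class *c, const char *name) {
      for (int i = 0; c->names[i]; i++)
          if (strcmp(c->names[i], name) == 0)
              return i;
      return -1;
  }

  int main(void) {
      Class point = { { "x", "y", NULL } };
      Obj   p     = { &point, { 3, 4 } };

      int ix = class_offset(p.klass, "x"); /* pay the lookup once...     */
      int x  = p.attrs[ix];                /* ...then plain integer math */
      int y  = p.attrs[ix + 1];            /* "y" sits right after "x"   */

      printf("x=%d y=%d\n", x, y);
      return 0;
  }

This is roughly what the classoffset/getattribute pair lets a compiler
express at the PASM level, and it only stays valid as long as the
attribute layout doesn't change underneath the cached index.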
> 
> Anyway, a second example is indexed access to lexicals in addition to
> named access. Again, a premature pessimization with opcode support.
> 
> Another one is the "newsub" opcode, invented so that the subroutine
> object (and possibly the return continuation) can be created outside
> a loop in which the function is called.
> 
> All of these pessimized operations have one thing in common: one or
> more costly lookups with a constant key.
> 
> One possible solution is inline caching.
> 
> A cache is of course a kind of optimization. OTOH, when we know that
> we don't have to pay a runtime penalty for certain operations,
> solutions become thinkable again that would otherwise just be
> discarded as too expensive.
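
For concreteness, here is what such an inline cache might look like in
C. Again, the structures and names (InlineCache, get_attribute, and so
on) are illustrative, not Parrot's actual API; the idea is just that
each call site remembers the last class it saw, so the costly lookup
reruns only when the class changes:

  #include <stdio.h>
  #include <string.h>

  /* Toy stand-ins, not Parrot's real structures. */
  typedef struct { const char *names[4]; } Class;
  typedef struct { Class *klass; int attrs[4]; } Obj;

  /* One cache per call site: the last class seen there, plus the
   * attribute offset that was looked up for it. */
  typedef struct { Class *seen; int offset; } InlineCache;

  /* A linear scan standing in for the expensive hash lookup. */
  static int slow_lookup(Class *c, const char *name) {
      for (int i = 0; c->names[i]; i++)
          if (strcmp(c->names[i], name) == 0)
              return i;
      return -1;
  }

  /* getattribute with an inline cache: the lookup runs only when the
   * cached class no longer matches the object's class. */
  static int get_attribute(InlineCache *ic, Obj *o, const char *name) {
      if (ic->seen != o->klass) {          /* miss: (re)fill the cache */
          ic->seen   = o->klass;
          ic->offset = slow_lookup(o->klass, name);
      }
      return o->attrs[ic->offset];         /* hit: pure integer math */
  }

  int main(void) {
      Class point = { { "x", "y", NULL } };
      Obj objs[2] = { { &point, { 3, 4 } }, { &point, { 5, 6 } } };
      InlineCache ic = { NULL, -1 };  /* the cache for this call site */

      /* Only the first iteration pays for the lookup; after that the
       * cached class matches and the access is just an indexed fetch. */
      for (int i = 0; i < 2; i++)
          printf("objs[%d].x = %d\n", i,
                 get_attribute(&ic, &objs[i], "x"));
      return 0;
  }

Note that if the class structure itself can change (the getattribute
side effect mentioned above), the cache would also need some kind of
generation or version check, not just a pointer comparison.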
> 
> I'll comment on inline caching in a second mail.
> 
> leo
> 
> [1] Michael Walter in Re: [CVS ci] opcode cleanup 1 . minus 177 opcodes
> I'm fully behind Dan's answer - first fix broken stuff, then optimize.
> OTOH, when compilers are already choking on the source, it seems that
> this kind of pessimization isn't really premature ;)
> 
