This term came up in a recent discussion[1], but I'd like to give it a second meaning.

During design considerations we (myself included, of course) tend to discard or pessimize ideas early because they look inefficient. Recent examples include:
- spilling
- lexical (re-)fetching
- method calls


We even have some opcodes that work around the estimated bad effect of an operation. E.g. for attribute access:

  classoffset Ix, Pobj, "attr"
  getattribute Pattr, Pobj, Ix

instead of a plain and clear single opcode:

  getattribute Pattr, Pobj, "attr"

We think the former is more efficient because we can cache the expensive hash lookup in the index C<Ix> and use plain integer math to access further attributes. (This is BTW problematic anyway, as a side effect of a getattribute could be a change to the class structure, invalidating the cached index.)
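To illustrate that caveat, here is a tiny hypothetical sketch (the dict-based class layout is illustrative, not Parrot's actual representation) of how a cached slot index goes stale when something reshapes the class between the cached lookup and the access:

```python
# Hypothetical sketch: a cached slot index goes stale if an attribute
# access can mutate the class layout. The layout is modeled as a plain
# name -> slot-index dict; this is illustrative only.

cls = {"attr": 0, "other": 1}           # class layout: name -> slot
obj = ["value-a", "value-b"]            # attribute storage

ix = cls["attr"]                        # classoffset: cache the lookup

# ... a getattribute side effect reshapes the class ...
cls = {"new": 0, "attr": 1, "other": 2}
obj = ["new-v", "value-a", "value-b"]

print(obj[ix])                          # stale index: prints "new-v"
```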

Anyway, a second example is indexed access to lexicals in addition to named access. Again, a premature pessimization with opcode support.

Another one is the "newsub" opcode, invented to allow creating the subroutine object (and possibly the return continuation) outside of a loop in which the function is called.

All of these pessimized operations have one thing in common: one or several costly lookups with a constant key.

One possible solution is inline caching.

A cache is of course a kind of optimization. OTOH, when we know that we don't have to pay any runtime penalty for certain operations, solutions again become thinkable that would otherwise just be discarded as seemingly too expensive.
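To make the idea concrete, here is a minimal, hypothetical sketch of a monomorphic inline cache for attribute lookup. The names (C<VMClass>, C<InlineCache>, C<get_attr>) are illustrative, not Parrot's actual structures: the first execution at a call site pays the costly name lookup, and later executions at the same site reuse the cached slot index as long as the class layout is unchanged.

```python
# Hypothetical sketch of a monomorphic inline cache for attribute
# lookup; names are illustrative, not Parrot's actual structures.

class VMClass:
    def __init__(self, attr_names):
        self.version = 0                  # bumped on any layout change
        self.slots = {n: i for i, n in enumerate(attr_names)}

class Obj:
    def __init__(self, cls, values):
        self.cls = cls
        self.attrs = list(values)

class InlineCache:
    """One cache per getattribute call site (the key is constant there)."""
    def __init__(self):
        self.cls = None
        self.version = -1
        self.index = -1

def get_attr(obj, name, ic):
    cls = obj.cls
    if ic.cls is not cls or ic.version != cls.version:
        ic.cls = cls                      # miss: do the costly hash
        ic.version = cls.version          # lookup once and remember
        ic.index = cls.slots[name]        # the slot index
    return obj.attrs[ic.index]            # hit: plain integer indexing

point = VMClass(["x", "y"])
p = Obj(point, [3, 4])
ic = InlineCache()                        # lives at the opcode site
print(get_attr(p, "y", ic))   # first call fills the cache: 4
print(get_attr(p, "y", ic))   # second call takes the fast path: 4
```

Note that the version check also covers the getattribute side-effect problem mentioned above: if the class structure changes, the version no longer matches and the slow lookup runs again, instead of silently using a stale index.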

I'll comment on inline caching in a second mail.

leo

[1] Michael Walter in Re: [CVS ci] opcode cleanup 1 . minus 177 opcodes
I'm fully behind Dan's answer - first fix broken stuff, then optimize. OTOH when compilers already are choking on the source it seems that this kind of pessimization isn't really premature ;)



