On Nov 13, 2019, at 12:39 PM, Claes Redestad <claes.redes...@oracle.com> wrote:

> Similar to other promising candidates like constant folding I think it
> opens up some more general questions about observability. For example:
> do we retain the original bytecode and a mapping of everything back to it
> so that a debugger would be none the wiser about any inlining that may
> have happened? Do we "simply" deoptimize and redefine everything back to
> the stashed original when someone hooks in a debugger..? (Do we go so
> far as to allow opting out of debuggers entirely?)
Yep. These questions are why link-time optimization is not a slam dunk. If it were, I suppose we’d have it already.

Debugging is especially hard. We don’t want to give it up, but any program transformation is going to make debugging harder to implement. You should see what the JITs do to support deopt; like exact GC, it’s something that makes them different animals from the usual llvm/gcc compilers. That said, a bytecode-to-bytecode “deoptimization” is easier to specify and think about than a machine-code-to-bytecode deopt, which is what the JITs support. Juicy problems everywhere!

Also, I wonder if stashing classfiles for debugging only will motivate a partial resurrection of pack200, for storing said things “cold” on disk.
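
To make the “redefine back to the stashed original” idea a bit more concrete, here is a minimal sketch of what such a bytecode-to-bytecode deoptimization could look like, expressed as a java.lang.instrument agent. This is only an illustration under assumptions: the agent class (DeoptOnDebugAgent) and the stashedOriginals() helper are invented names, and the real mechanism would live inside the VM/linker rather than in an agent; only Instrumentation.redefineClasses and ClassDefinition are real JDK API.

  // Hypothetical sketch: when attached (e.g. just before a debugging session),
  // swap optimized classes back to their stashed pre-optimization bytecode.
  import java.lang.instrument.ClassDefinition;
  import java.lang.instrument.Instrumentation;
  import java.util.Map;

  public final class DeoptOnDebugAgent {

      // Assumed: the link-time optimizer recorded the original classfile bytes
      // somewhere retrievable; modeled here as a simple in-memory map.
      private static Map<Class<?>, byte[]> stashedOriginals() {
          return Map.of(); // would be populated by the (hypothetical) optimizer
      }

      // agentmain runs when the agent is attached to a live VM.
      public static void agentmain(String args, Instrumentation inst) throws Exception {
          if (!inst.isRedefineClassesSupported()) {
              return; // nothing to do; the debugger sees the optimized bytecode
          }
          for (Map.Entry<Class<?>, byte[]> e : stashedOriginals().entrySet()) {
              // Bytecode-to-bytecode "deoptimization": restore the original class
              // body so line numbers, locals, and frames match the source again.
              inst.redefineClasses(new ClassDefinition(e.getKey(), e.getValue()));
          }
      }
  }

The point of the sketch is just that the contract is expressible at the classfile level, which is what makes it easier to specify than the machine-code-to-bytecode deopt the JITs do.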