I've just been working all day on the OPAL decompiler, getting it almost to run (it already works for various methods). Now ...

could someone please explain to me why the current implementation of closures isn't a terribly bad idea?

I can see why closures that work like Pharo's would pay off for Lisp programmers, since they give O(1) access to captured variables. However, that boost only starts paying off once you have more than 4 levels of nested closures, something which, unlike in Lisp, almost never happens in Pharo. Or at least shouldn't happen (and if it does, it's probably ok to punish those people with slower performance).
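To be concrete about the depth argument, here is a contrived sketch (nobody writes this in Pharo) of four levels of nesting. With chained environments each read of x walks four frames outward; with the flat scheme, x (or its shared temp vector) is copied into the closure, so the read is a single load regardless of depth:

    | x |
    x := 42.
    [ [ [ [ x ] value ] value ] value ] value
        "evaluates to 42; four nested closures, all capturing x"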

This implementation is pretty hard to understand, and it makes decompilation all but impossible unless you make very strong assumptions about how the bytecodes are used. This in turn reduces the reusability of the new bytecodes, and probably of the decompiler as well, once people start actually using the pushNewArray: bytecodes.
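For reference, the rewrite that pushNewArray: supports looks roughly like the following source-level sketch (mine, not actual compiler output; the bytecode heap-allocates the array, and the remote-temp bytecodes perform the at:/at:put: accesses). Because the block below writes count, count can't simply be copied into the closure:

    counter
        | count |
        count := 0.
        ^ [ count := count + 1 ]

    counterRewritten
        "what the compiler effectively does: hoist count into a shared
        heap Array, the 'temp vector' allocated by pushNewArray: 1"
        | vec |
        vec := Array new: 1.
        vec at: 1 put: 0.
        ^ [ vec at: 1 put: (vec at: 1) + 1 ]

Decompiling means recognizing this pattern and folding vec back into count, which is exactly where the strong assumptions come in.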

You might save a tiny bit of memory by having stuff garbage collected once it's no longer needed ... but I doubt the whole design is based on that? Especially since it penalizes performance for standard methods in almost every possible way. And it even wastes memory in the general case. I don't get it.

But probably I'm missing something?

cheers,
Toon
